Generalized Notation Notation (GNN) Pipeline Output Summary

Table of Contents

  • GNN Discovery (Step 1)
  • GNN Type Checker (Step 4)
  • GNN Exports (Step 5)
  • GNN Visualizations (Step 6)

GNN Discovery (Step 1)

GNN File Discovery Report

Processed 2 GNN file(s) from directory: src/gnn/examples
Search pattern used: **/*.md
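
For reference, the discovery behavior above can be reproduced with a short Python sketch (the function name is illustrative, not part of the pipeline):

```python
from pathlib import Path

def discover_gnn_files(base_dir: str, pattern: str = "**/*.md") -> list[Path]:
    """Recursively collect candidate GNN files, mirroring the Step 1 search."""
    return sorted(Path(base_dir).glob(pattern))

# Reproduces the two files analyzed in this report.
for path in discover_gnn_files("src/gnn/examples"):
    print(path)
```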

Overall Summary


Detailed File Analysis

File: src/gnn/examples/pymdp_pomdp_agent.md

Found Sections: GNNSection, GNNVersionAndFlags, ModelName, ModelAnnotation, StateSpaceBlock, Connections, InitialParameterization, Equations, Time, ActInfOntologyAnnotation, ModelParameters, Footer, Signature


File: src/gnn/examples/rxinfer_multiagent_gnn.md

Found Sections: GNNSection, GNNVersionAndFlags, ModelName, ModelAnnotation, StateSpaceBlock, Connections, InitialParameterization, Equations, Time, ActInfOntologyAnnotation, ModelParameters, Footer, Signature


GNN Type Checker (Step 4)

Type Check Report

GNN Type Checker Report

pymdp_pomdp_agent.md: ✅ VALID

Path: src/gnn/examples/pymdp_pomdp_agent.md

rxinfer_multiagent_gnn.md: ✅ VALID

Path: src/gnn/examples/rxinfer_multiagent_gnn.md

Checked 2 files, 2 valid, 0 invalid

Resource Estimates

Images

Markdown Reports

resource_report.md

GNN Resource Estimation Report

Analyzed 2 files.

  • Average Memory Usage: 0.50 KB
  • Average Inference Time: 218.62 units
  • Average Storage: 5.29 KB

pymdp_pomdp_agent.md

Path: src/gnn/examples/pymdp_pomdp_agent.md

  • Memory Estimate: 0.48 KB
  • Inference Estimate: 154.07 units
  • Storage Estimate: 3.83 KB

Model Info

  • variables_count: 21
  • edges_count: 2
  • time_spec: Dynamic
  • equation_count: 5

Complexity Metrics

  • state_space_complexity: 6.9658
  • graph_density: 0.0048
  • avg_in_degree: 1.0000
  • avg_out_degree: 1.0000
  • max_in_degree: 1.0000
  • max_out_degree: 1.0000
  • cyclic_complexity: 0.0000
  • temporal_complexity: 0.0000
  • equation_complexity: 8.7600
  • overall_complexity: 8.7413
  • variable_count: 21.0000
  • edge_count: 2.0000
  • total_state_space_dim: 124.0000
  • max_variable_dim: 27.0000

rxinfer_multiagent_gnn.md

Path: src/gnn/examples/rxinfer_multiagent_gnn.md

  • Memory Estimate: 0.52 KB
  • Inference Estimate: 283.16 units
  • Storage Estimate: 6.76 KB

Model Info

  • variables_count: 60
  • edges_count: 1
  • time_spec: Dynamic
  • equation_count: 15

Complexity Metrics

  • state_space_complexity: 6.8202
  • graph_density: 0.0003
  • avg_in_degree: 1.0000
  • avg_out_degree: 1.0000
  • max_in_degree: 1.0000
  • max_out_degree: 1.0000
  • cyclic_complexity: 0.0000
  • temporal_complexity: 0.0000
  • equation_complexity: 3.2578
  • overall_complexity: 5.3649
  • variable_count: 60.0000
  • edge_count: 1.0000
  • total_state_space_dim: 112.0000
  • max_variable_dim: 16.0000

Metric Definitions

General Metrics

  • Memory Estimate (KB): Estimated RAM required to hold the model's variables and data structures in memory. Calculated based on variable dimensions and data types (e.g., float: 4 bytes, int: 4 bytes).
  • Inference Estimate (units): A relative, abstract measure of computational cost for a single inference pass. It is derived from factors like model type (Static, Dynamic, Hierarchical), the number and type of variables, the complexity of connections (edges), and the operations defined in equations. Higher values indicate a more computationally intensive model. These units are not tied to a specific hardware time (e.g., milliseconds) but allow for comparison between different GNN models.
  • Storage Estimate (KB): Estimated disk space required to store the model file. This includes the memory footprint of the data plus overhead for the GNN textual representation, metadata, comments, and equations.
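
As a worked example of the memory definition above, the 0.48 KB figure for pymdp_pomdp_agent.md follows from multiplying out each variable's dimensions at 4 bytes per element. A minimal sketch (not the pipeline's actual code):

```python
from math import prod

BYTES_PER_ELEMENT = 4  # float/int, as stated in the definition above

def memory_estimate_kb(dimension_lists):
    """dimension_lists: one list of dimensions per variable, e.g. [3, 2, 3] for A_m0."""
    total_elements = sum(prod(dims) for dims in dimension_lists)
    return total_elements * BYTES_PER_ELEMENT / 1024

# The 21 variables of the PyMDP model (A_m0..t plus pi_f1) total 124 elements:
print(memory_estimate_kb([[3, 2, 3]] * 3 + [[2, 2, 1], [3, 3, 3]]
                         + [[3]] * 3 + [[2], [3]]
                         + [[2, 1], [3, 1]] * 2 + [[3, 1]] * 3
                         + [[3], [1], [1], [1]]))  # -> 0.484375
```

The storage estimate adds textual-format overhead on top of this footprint, so it does not follow from element counts alone.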

Complexity Metrics (scores are generally relative; higher often means more complex)

  • state_space_complexity: Logarithmic measure of the total dimensionality of all variables (sum of the product of dimensions for each variable). Represents the model's theoretical information capacity or the size of its state space.
  • graph_density: Ratio of actual edges to the maximum possible edges in the model graph. A value of 0 indicates no connections, while 1 would mean a fully connected graph. Measures how interconnected the variables are.
  • avg_in_degree: Average number of incoming connections (edges) per variable.
  • avg_out_degree: Average number of outgoing connections (edges) per variable.
  • max_in_degree: Maximum number of incoming connections for any single variable in the model.
  • max_out_degree: Maximum number of outgoing connections for any single variable in the model.
  • cyclic_complexity: A score indicating the presence and extent of cyclic patterns or feedback loops in the graph. Approximated based on the ratio of edges to variables; higher values suggest more complex recurrent interactions.
  • temporal_complexity: Proportion of edges that involve time dependencies (e.g., connecting a variable at time t to one at t+1). Indicates the degree to which the model's behavior depends on past states or sequences.
  • equation_complexity: A measure based on the average length, number, and types of mathematical operators (e.g., +, *, log, softmax) used in the model's equations. Higher values suggest more intricate mathematical relationships between variables.
  • overall_complexity: A weighted composite score (typically scaled, e.g., 0-10) that combines state space size, graph structure (density, cyclicity), temporal aspects, and equation complexity to provide a single, holistic measure of the model's intricacy.
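
The two headline graph metrics reported above are consistent with simple closed forms, inferred here from the reported numbers rather than from pipeline source:

```python
import math

def state_space_complexity(total_dim: int) -> float:
    # Interpreted as log2(1 + total state-space dimension)
    return math.log2(1 + total_dim)

def graph_density(edges: int, variables: int) -> float:
    # Edges over the maximum possible directed edges, n * (n - 1)
    return edges / (variables * (variables - 1))

print(state_space_complexity(124), graph_density(2, 21))  # 6.9658..., 0.0048 (pymdp_pomdp_agent)
print(state_space_complexity(112), graph_density(1, 60))  # 6.8202..., 0.0003 (rxinfer_multiagent_gnn)
```

The degree metrics follow similarly from the edge and variable counts; with at most one edge touching any variable in these examples, all degree statistics are 1.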

HTML Reports/Outputs

resource_report_detailed.html

View standalone: resource_report_detailed.html

JSON Files

resource_data.json

{
  "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md": {
    "file": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md",
    "model_name": "Multifactor PyMDP Agent v1",
    "memory_estimate": 0.484375,
    "inference_estimate": 154.06988264859797,
    "storage_estimate": 3.82846875,
    "flops_estimate": {
      "total_flops": 1050.0,
      "matrix_operations": 0,
      "element_operations": 0,
      "nonlinear_operations": 0
    },
    "inference_time_estimate": {
      "cpu_time_seconds": 2.1e-08,
      "cpu_time_ms": 2.1e-05,
      "cpu_time_us": 0.020999999999999998
    },
    "batched_inference_estimate": {
      "batch_1": {
        "flops": 1050.0,
        "time_seconds": 2.1e-08,
        "throughput_per_second": 47619047.61904762
      },
      "batch_8": {
        "flops": 6674.971489500035,
        "time_seconds": 1.334994297900007e-07,
        "throughput_per_second": 59925349.58826627
      },
      "batch_32": {
        "flops": 25518.25782075925,
        "time_seconds": 5.10365156415185e-07,
        "throughput_per_second": 62700205.13306323
      },
      "batch_128": {
        "flops": 99830.77636640746,
        "time_seconds": 1.9966155273281492e-06,
        "throughput_per_second": 64108486.710652955
      },
      "batch_512": {
        "flops": 394234.3967437306,
        "time_seconds": 7.884687934874611e-06,
        "throughput_per_second": 64935987.85760216
      }
    },
    "model_overhead": {
      "compilation_ms": 79,
      "optimization_ms": 240.5,
      "memory_overhead_kb": 2.572265625
    },
    "complexity": {
      "state_space_complexity": 6.965784284662087,
      "graph_density": 0.004761904761904762,
      "avg_in_degree": 1.0,
      "avg_out_degree": 1.0,
      "max_in_degree": 1,
      "max_out_degree": 1,
      "cyclic_complexity": 0,
      "temporal_complexity": 0.0,
      "equation_complexity": 8.76,
      "overall_complexity": 8.741273094711996,
      "variable_count": 21,
      "edge_count": 2,
      "total_state_space_dim": 124,
      "max_variable_dim": 27
    },
    "model_info": {
      "variables_count": 21,
      "edges_count": 2,
      "time_spec": "Dynamic",
      "equation_count": 5
    }
  },
  "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md": {
    "file": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md",
    "model_name": "Multi-agent Trajectory Planning",
    "memory_estimate": 0.5166015625,
    "inference_estimate": 283.1611446514433,
    "storage_estimate": 6.7573515625,
    "flops_estimate": {
      "total_flops": 20.0,
      "matrix_operations": 0,
      "element_operations": 8,
      "nonlinear_operations": 0
    },
    "inference_time_estimate": {
      "cpu_time_seconds": 4e-10,
      "cpu_time_ms": 4.0000000000000003e-07,
      "cpu_time_us": 0.0004
    },
    "batched_inference_estimate": {
      "batch_1": {
        "flops": 20.0,
        "time_seconds": 4e-10,
        "throughput_per_second": 2500000000.0
      },
      "batch_8": {
        "flops": 127.14231408571496,
        "time_seconds": 2.5428462817142993e-09,
        "throughput_per_second": 3146080853.383979
      },
      "batch_32": {
        "flops": 486.0620537287476,
        "time_seconds": 9.721241074574952e-09,
        "throughput_per_second": 3291760769.48582
      },
      "batch_128": {
        "flops": 1901.5385974553803,
        "time_seconds": 3.8030771949107605e-08,
        "throughput_per_second": 3365695552.30928
      },
      "batch_512": {
        "flops": 7509.226604642487,
        "time_seconds": 1.5018453209284973e-07,
        "throughput_per_second": 3409139362.5241137
      }
    },
    "model_overhead": {
      "compilation_ms": 206,
      "optimization_ms": 1820.0,
      "memory_overhead_kb": 5.423828125
    },
    "complexity": {
      "state_space_complexity": 6.820178962415188,
      "graph_density": 0.0002824858757062147,
      "avg_in_degree": 1.0,
      "avg_out_degree": 1.0,
      "max_in_degree": 1,
      "max_out_degree": 1,
      "cyclic_complexity": 0,
      "temporal_complexity": 0.0,
      "equation_complexity": 3.2577777777777777,
      "overall_complexity": 5.364897390812113,
      "variable_count": 60,
      "edge_count": 1,
      "total_state_space_dim": 112,
      "max_variable_dim": 16
    },
    "model_info": {
      "variables_count": 60,
      "edges_count": 1,
      "time_spec": "Dynamic",
      "equation_count": 15
    }
  }
}
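The timing fields in resource_data.json are internally consistent: both entries imply a fixed effective rate of 5e10 FLOP/s (1050 / 2.1e-8 and 20 / 4e-10), and batched throughput is batch size divided by batch time. A quick consistency check (illustrative only):

```python
import json

# Implied by total_flops / cpu_time_seconds in both entries; an inferred constant, not documented.
ASSUMED_FLOPS_PER_SECOND = 5e10

with open("resource_data.json") as fh:  # path relative to the resource-estimation output directory
    data = json.load(fh)

for entry in data.values():
    flops = entry["flops_estimate"]["total_flops"]
    t = entry["inference_time_estimate"]["cpu_time_seconds"]
    assert abs(t - flops / ASSUMED_FLOPS_PER_SECOND) < 1e-12
    for name, batch in entry["batched_inference_estimate"].items():
        batch_size = int(name.split("_")[1])
        # throughput_per_second is just batch size divided by batch time
        assert abs(batch["throughput_per_second"] - batch_size / batch["time_seconds"]) < 1.0
```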

GNN Exports (Step 5)

Export Step Report

📤 GNN Export Step Summary

🗓️ Generated: 2025-06-06 13:08:09

⚙️ Configuration

📊 Export Statistics

Exports for pymdp_pomdp_agent

JSON Files

pymdp_pomdp_agent.json

{
  "file_path": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md",
  "name": "Multifactor PyMDP Agent v1",
  "metadata": {
    "description": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example."
  },
  "states": [
    {
      "id": "A_m0",
      "dimensions": "3,2,3,type=float",
      "original_id": "A_m0"
    },
    {
      "id": "A_m1",
      "dimensions": "3,2,3,type=float",
      "original_id": "A_m1"
    },
    {
      "id": "A_m2",
      "dimensions": "3,2,3,type=float",
      "original_id": "A_m2"
    },
    {
      "id": "B_f0",
      "dimensions": "2,2,1,type=float",
      "original_id": "B_f0"
    },
    {
      "id": "B_f1",
      "dimensions": "3,3,3,type=float",
      "original_id": "B_f1"
    },
    {
      "id": "C_m0",
      "dimensions": "3,type=float",
      "original_id": "C_m0"
    },
    {
      "id": "C_m1",
      "dimensions": "3,type=float",
      "original_id": "C_m1"
    },
    {
      "id": "C_m2",
      "dimensions": "3,type=float",
      "original_id": "C_m2"
    },
    {
      "id": "D_f0",
      "dimensions": "2,type=float",
      "original_id": "D_f0"
    },
    {
      "id": "D_f1",
      "dimensions": "3,type=float",
      "original_id": "D_f1"
    },
    {
      "id": "s_f0",
      "dimensions": "2,1,type=float",
      "original_id": "s_f0"
    },
    {
      "id": "s_f1",
      "dimensions": "3,1,type=float",
      "original_id": "s_f1"
    },
    {
      "id": "s_prime_f0",
      "dimensions": "2,1,type=float",
      "original_id": "s_prime_f0"
    },
    {
      "id": "s_prime_f1",
      "dimensions": "3,1,type=float",
      "original_id": "s_prime_f1"
    },
    {
      "id": "o_m0",
      "dimensions": "3,1,type=float",
      "original_id": "o_m0"
    },
    {
      "id": "o_m1",
      "dimensions": "3,1,type=float",
      "original_id": "o_m1"
    },
    {
      "id": "o_m2",
      "dimensions": "3,1,type=float",
      "original_id": "o_m2"
    },
    {
      "id": "u_f1",
      "dimensions": "1,type=int",
      "original_id": "u_f1"
    },
    {
      "id": "G",
      "dimensions": "1,type=float",
      "original_id": "G"
    },
    {
      "id": "t",
      "dimensions": "1,type=int",
      "original_id": "t"
    }
  ],
  "parameters": {},
  "initial_parameters": {},
  "observations": [],
  "transitions": [
    {
      "sources": [
        "D_f0",
        "D_f1"
      ],
      "operator": "-",
      "targets": [
        "s_f0",
        "s_f1"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "s_f0",
        "s_f1"
      ],
      "operator": "-",
      "targets": [
        "A_m0",
        "A_m1",
        "A_m2"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "A_m0",
        "A_m1",
        "A_m2"
      ],
      "operator": "-",
      "targets": [
        "o_m0",
        "o_m1",
        "o_m2"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "B_f0",
        "B_f1"
      ],
      "operator": "-",
      "targets": [
        "s_prime_f0",
        "s_prime_f1"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "C_m0",
        "C_m1",
        "C_m2"
      ],
      "operator": ">",
      "targets": [
        "G"
      ],
      "attributes": {}
    }
  ],
  "ontology_annotations": {
    "A_m0": "LikelihoodMatrixModality0",
    "A_m1": "LikelihoodMatrixModality1",
    "A_m2": "LikelihoodMatrixModality2",
    "B_f0": "TransitionMatrixFactor0",
    "B_f1": "TransitionMatrixFactor1",
    "C_m0": "LogPreferenceVectorModality0",
    "C_m1": "LogPreferenceVectorModality1",
    "C_m2": "LogPreferenceVectorModality2",
    "D_f0": "PriorOverHiddenStatesFactor0",
    "D_f1": "PriorOverHiddenStatesFactor1",
    "s_f0": "HiddenStateFactor0",
    "s_f1": "HiddenStateFactor1",
    "s_prime_f0": "NextHiddenStateFactor0",
    "s_prime_f1": "NextHiddenStateFactor1",
    "o_m0": "ObservationModality0",
    "o_m1": "ObservationModality1",
    "o_m2": "ObservationModality2",
    "\u03c0_f1": "PolicyVectorFactor1 # Distribution over actions for factor 1",
    "u_f1": "ActionFactor1       # Chosen action for factor 1",
    "G": "ExpectedFreeEnergy"
  },
  "equations_text": "",
  "time_info": {
    "DiscreteTime": "t",
    "ModelTimeHorizon": "Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon."
  },
  "footer_text": "",
  "signature": {},
  "raw_sections": {
    "GNNSection": "MultifactorPyMDPAgent",
    "GNNVersionAndFlags": "GNN v1",
    "ModelName": "Multifactor PyMDP Agent v1",
    "ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
    "StateSpaceBlock": "# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]\nA_m0[3,2,3,type=float]   # Likelihood for modality 0 (\"state_observation\")\nA_m1[3,2,3,type=float]   # Likelihood for modality 1 (\"reward\")\nA_m2[3,2,3,type=float]   # Likelihood for modality 2 (\"decision_proprioceptive\")\n\n# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]\nB_f0[2,2,1,type=float]   # Transitions for factor 0 (\"reward_level\"), 1 implicit action (uncontrolled)\nB_f1[3,3,3,type=float]   # Transitions for factor 1 (\"decision_state\"), 3 actions\n\n# C_vectors are defined per modality: C_m[observation_outcomes]\nC_m0[3,type=float]       # Preferences for modality 0\nC_m1[3,type=float]       # Preferences for modality 1\nC_m2[3,type=float]       # Preferences for modality 2\n\n# D_vectors are defined per hidden state factor: D_f[states]\nD_f0[2,type=float]       # Prior for factor 0\nD_f1[3,type=float]       # Prior for factor 1\n\n# Hidden States\ns_f0[2,1,type=float]     # Hidden state for factor 0 (\"reward_level\")\ns_f1[3,1,type=float]     # Hidden state for factor 1 (\"decision_state\")\ns_prime_f0[2,1,type=float] # Next hidden state for factor 0\ns_prime_f1[3,1,type=float] # Next hidden state for factor 1\n\n# Observations\no_m0[3,1,type=float]     # Observation for modality 0\no_m1[3,1,type=float]     # Observation for modality 1\no_m2[3,1,type=float]     # Observation for modality 2\n\n# Policy and Control\n\u03c0_f1[3,type=float]       # Policy (distribution over actions) for controllable factor 1\nu_f1[1,type=int]         # Action taken for controllable factor 1\nG[1,type=float]          # Expected Free Energy (overall, or can be per policy)\nt[1,type=int]            # Time step",
    "Connections": "(D_f0,D_f1)-(s_f0,s_f1)\n(s_f0,s_f1)-(A_m0,A_m1,A_m2)\n(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)\n(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled\n(B_f0,B_f1)-(s_prime_f0,s_prime_f1)\n(C_m0,C_m1,C_m2)>G\nG>\u03c0_f1\n\u03c0_f1-u_f1\nG=ExpectedFreeEnergy\nt=Time",
    "InitialParameterization": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n  ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ),  # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n  ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ),  # obs=1\n  ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) )   # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n  ( (0.0,0.731,0.0), (0.0,0.269,0.0) ),  # obs=0\n  ( (0.0,0.269,0.0), (0.0,0.731,0.0) ),  # obs=1\n  ( (1.0,0.0,1.0), (1.0,0.0,1.0) )      # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n  ( (1.0,0.0,0.0), (1.0,0.0,0.0) ),  # obs=0\n  ( (0.0,1.0,0.0), (0.0,1.0,0.0) ),  # obs=1\n  ( (0.0,0.0,1.0), (0.0,0.0,1.0) )   # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n  ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n  ( (0.0),(1.0) )  # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n  ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n  ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n  ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) )  # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
    "InitialParameterization_raw_content": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n  ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ),  # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n  ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ),  # obs=1\n  ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) )   # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n  ( (0.0,0.731,0.0), (0.0,0.269,0.0) ),  # obs=0\n  ( (0.0,0.269,0.0), (0.0,0.731,0.0) ),  # obs=1\n  ( (1.0,0.0,1.0), (1.0,0.0,1.0) )      # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n  ( (1.0,0.0,0.0), (1.0,0.0,0.0) ),  # obs=0\n  ( (0.0,1.0,0.0), (0.0,1.0,0.0) ),  # obs=1\n  ( (0.0,0.0,1.0), (0.0,0.0,1.0) )   # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n  ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n  ( (0.0),(1.0) )  # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n  ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n  ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n  ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) )  # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
    "Equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
    "Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
    "ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1       # Chosen action for factor 1\nG=ExpectedFreeEnergy",
    "ModelParameters": "num_hidden_states_factors: [2, 3]  # s_f0[2], s_f1[3]\nnum_obs_modalities: [3, 3, 3]     # o_m0[3], o_m1[3], o_m2[3]\nnum_control_factors: [1, 3]   # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)",
    "Footer": "Multifactor PyMDP Agent v1 - GNN Representation",
    "Signature": "NA"
  },
  "other_sections": {},
  "gnnsection": {},
  "gnnversionandflags": {},
  "equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
  "ModelParameters": {
    "num_hidden_states_factors": "[2, 3]",
    "num_obs_modalities": "[3, 3, 3]",
    "num_control_factors": "[1, 3]"
  },
  "num_hidden_states_factors": "[2, 3]",
  "num_obs_modalities": "[3, 3, 3]",
  "num_control_factors": "[1, 3]",
  "footer": "Multifactor PyMDP Agent v1 - GNN Representation"
}
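The exported JSON is directly machine-readable; for instance, each state's `dimensions` string can be unpacked into a shape and a dtype. A minimal sketch, assuming the file is read from the export output directory:

```python
import json

def parse_dimensions(spec: str) -> tuple[list[int], str]:
    """Split a GNN dimension string such as '3,2,3,type=float' into ([3, 2, 3], 'float')."""
    parts = spec.split(",")
    dims = [int(p) for p in parts if not p.startswith("type=")]
    dtype = next((p.split("=", 1)[1] for p in parts if p.startswith("type=")), "float")
    return dims, dtype

with open("pymdp_pomdp_agent.json") as fh:
    export = json.load(fh)

for state in export["states"]:
    dims, dtype = parse_dimensions(state["dimensions"])
    print(state["id"], dims, dtype)  # e.g. A_m0 [3, 2, 3] float
```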

Text/Log Files

pymdp_pomdp_agent.txt

GNN Model Summary: Multifactor PyMDP Agent v1
Source File: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/pymdp_pomdp_agent.md

Metadata:
  description: This model represents a PyMDP agent with multiple observation modalities and hidden state factors.
- Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes)
- Hidden state factors: "reward_level" (2 states), "decision_state" (3 states)
- Control: "decision_state" factor is controllable with 3 possible actions.
The parameterization is derived from a PyMDP Python script example.

States (20):
  - ID: A_m0 (dimensions=3,2,3,type=float, original_id=A_m0)
  - ID: A_m1 (dimensions=3,2,3,type=float, original_id=A_m1)
  - ID: A_m2 (dimensions=3,2,3,type=float, original_id=A_m2)
  - ID: B_f0 (dimensions=2,2,1,type=float, original_id=B_f0)
  - ID: B_f1 (dimensions=3,3,3,type=float, original_id=B_f1)
  - ID: C_m0 (dimensions=3,type=float, original_id=C_m0)
  - ID: C_m1 (dimensions=3,type=float, original_id=C_m1)
  - ID: C_m2 (dimensions=3,type=float, original_id=C_m2)
  - ID: D_f0 (dimensions=2,type=float, original_id=D_f0)
  - ID: D_f1 (dimensions=3,type=float, original_id=D_f1)
  - ID: s_f0 (dimensions=2,1,type=float, original_id=s_f0)
  - ID: s_f1 (dimensions=3,1,type=float, original_id=s_f1)
  - ID: s_prime_f0 (dimensions=2,1,type=float, original_id=s_prime_f0)
  - ID: s_prime_f1 (dimensions=3,1,type=float, original_id=s_prime_f1)
  - ID: o_m0 (dimensions=3,1,type=float, original_id=o_m0)
  - ID: o_m1 (dimensions=3,1,type=float, original_id=o_m1)
  - ID: o_m2 (dimensions=3,1,type=float, original_id=o_m2)
  - ID: u_f1 (dimensions=1,type=int, original_id=u_f1)
  - ID: G (dimensions=1,type=float, original_id=G)
  - ID: t (dimensions=1,type=int, original_id=t)

Initial Parameters (0):

General Parameters (0):

Observations (0):

Transitions (5):
  - None -> None
  - None -> None
  - None -> None
  - None -> None
  - None -> None

Ontology Annotations (20):
  A_m0 = LikelihoodMatrixModality0
  A_m1 = LikelihoodMatrixModality1
  A_m2 = LikelihoodMatrixModality2
  B_f0 = TransitionMatrixFactor0
  B_f1 = TransitionMatrixFactor1
  C_m0 = LogPreferenceVectorModality0
  C_m1 = LogPreferenceVectorModality1
  C_m2 = LogPreferenceVectorModality2
  D_f0 = PriorOverHiddenStatesFactor0
  D_f1 = PriorOverHiddenStatesFactor1
  s_f0 = HiddenStateFactor0
  s_f1 = HiddenStateFactor1
  s_prime_f0 = NextHiddenStateFactor0
  s_prime_f1 = NextHiddenStateFactor1
  o_m0 = ObservationModality0
  o_m1 = ObservationModality1
  o_m2 = ObservationModality2
  π_f1 = PolicyVectorFactor1 # Distribution over actions for factor 1
  u_f1 = ActionFactor1       # Chosen action for factor 1
  G = ExpectedFreeEnergy

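The dimensions summarized above map directly onto a PyMDP agent. A minimal sketch, assuming the inferactively-pymdp package, with random placeholders standing in for the values given in InitialParameterization:

```python
import numpy as np
from pymdp import utils
from pymdp.agent import Agent

# Dimensions from ModelParameters in the export above.
num_states = [2, 3]      # hidden state factors: reward_level, decision_state
num_obs = [3, 3, 3]      # observation modalities
num_controls = [1, 3]    # B_f0 uncontrolled, B_f1 has 3 actions

# Random placeholders; the actual values are listed in InitialParameterization.
A = utils.random_A_matrix(num_obs, num_states)
B = utils.random_B_matrix(num_states, num_controls)
C = utils.obj_array_zeros(num_obs)
C[1] = np.array([1.0, -2.0, 0.0])  # C_m1: preference over the reward modality

agent = Agent(A=A, B=B, C=C)       # D defaults to uniform priors, matching D_f0/D_f1

# The loop named in the Equations section of the export.
obs = [0, 0, 0]                    # one observation index per modality
qs = agent.infer_states(obs)
q_pi, efe = agent.infer_policies()
action = agent.sample_action()
```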

Other Files

Exports for rxinfer_multiagent_gnn

JSON Files

rxinfer_multiagent_gnn.json

{
  "file_path": "/home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md",
  "name": "Multi-agent Trajectory Planning",
  "metadata": {
    "description": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles."
  },
  "states": [
    {
      "id": "dt",
      "dimensions": "1,type=float",
      "original_id": "dt"
    },
    {
      "id": "gamma",
      "dimensions": "1,type=float",
      "original_id": "gamma"
    },
    {
      "id": "nr_steps",
      "dimensions": "1,type=int",
      "original_id": "nr_steps"
    },
    {
      "id": "nr_iterations",
      "dimensions": "1,type=int",
      "original_id": "nr_iterations"
    },
    {
      "id": "nr_agents",
      "dimensions": "1,type=int",
      "original_id": "nr_agents"
    },
    {
      "id": "softmin_temperature",
      "dimensions": "1,type=float",
      "original_id": "softmin_temperature"
    },
    {
      "id": "intermediate_steps",
      "dimensions": "1,type=int",
      "original_id": "intermediate_steps"
    },
    {
      "id": "save_intermediates",
      "dimensions": "1,type=bool",
      "original_id": "save_intermediates"
    },
    {
      "id": "A",
      "dimensions": "4,4,type=float",
      "original_id": "A"
    },
    {
      "id": "B",
      "dimensions": "4,2,type=float",
      "original_id": "B"
    },
    {
      "id": "C",
      "dimensions": "2,4,type=float",
      "original_id": "C"
    },
    {
      "id": "initial_state_variance",
      "dimensions": "1,type=float",
      "original_id": "initial_state_variance"
    },
    {
      "id": "control_variance",
      "dimensions": "1,type=float",
      "original_id": "control_variance"
    },
    {
      "id": "goal_constraint_variance",
      "dimensions": "1,type=float",
      "original_id": "goal_constraint_variance"
    },
    {
      "id": "gamma_shape",
      "dimensions": "1,type=float",
      "original_id": "gamma_shape"
    },
    {
      "id": "gamma_scale_factor",
      "dimensions": "1,type=float",
      "original_id": "gamma_scale_factor"
    },
    {
      "id": "x_limits",
      "dimensions": "2,type=float",
      "original_id": "x_limits"
    },
    {
      "id": "y_limits",
      "dimensions": "2,type=float",
      "original_id": "y_limits"
    },
    {
      "id": "fps",
      "dimensions": "1,type=int",
      "original_id": "fps"
    },
    {
      "id": "heatmap_resolution",
      "dimensions": "1,type=int",
      "original_id": "heatmap_resolution"
    },
    {
      "id": "plot_width",
      "dimensions": "1,type=int",
      "original_id": "plot_width"
    },
    {
      "id": "plot_height",
      "dimensions": "1,type=int",
      "original_id": "plot_height"
    },
    {
      "id": "agent_alpha",
      "dimensions": "1,type=float",
      "original_id": "agent_alpha"
    },
    {
      "id": "target_alpha",
      "dimensions": "1,type=float",
      "original_id": "target_alpha"
    },
    {
      "id": "color_palette",
      "dimensions": "1,type=string",
      "original_id": "color_palette"
    },
    {
      "id": "door_obstacle_center_1",
      "dimensions": "2,type=float",
      "original_id": "door_obstacle_center_1"
    },
    {
      "id": "door_obstacle_size_1",
      "dimensions": "2,type=float",
      "original_id": "door_obstacle_size_1"
    },
    {
      "id": "door_obstacle_center_2",
      "dimensions": "2,type=float",
      "original_id": "door_obstacle_center_2"
    },
    {
      "id": "door_obstacle_size_2",
      "dimensions": "2,type=float",
      "original_id": "door_obstacle_size_2"
    },
    {
      "id": "wall_obstacle_center",
      "dimensions": "2,type=float",
      "original_id": "wall_obstacle_center"
    },
    {
      "id": "wall_obstacle_size",
      "dimensions": "2,type=float",
      "original_id": "wall_obstacle_size"
    },
    {
      "id": "combined_obstacle_center_1",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_center_1"
    },
    {
      "id": "combined_obstacle_size_1",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_size_1"
    },
    {
      "id": "combined_obstacle_center_2",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_center_2"
    },
    {
      "id": "combined_obstacle_size_2",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_size_2"
    },
    {
      "id": "combined_obstacle_center_3",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_center_3"
    },
    {
      "id": "combined_obstacle_size_3",
      "dimensions": "2,type=float",
      "original_id": "combined_obstacle_size_3"
    },
    {
      "id": "agent1_id",
      "dimensions": "1,type=int",
      "original_id": "agent1_id"
    },
    {
      "id": "agent1_radius",
      "dimensions": "1,type=float",
      "original_id": "agent1_radius"
    },
    {
      "id": "agent1_initial_position",
      "dimensions": "2,type=float",
      "original_id": "agent1_initial_position"
    },
    {
      "id": "agent1_target_position",
      "dimensions": "2,type=float",
      "original_id": "agent1_target_position"
    },
    {
      "id": "agent2_id",
      "dimensions": "1,type=int",
      "original_id": "agent2_id"
    },
    {
      "id": "agent2_radius",
      "dimensions": "1,type=float",
      "original_id": "agent2_radius"
    },
    {
      "id": "agent2_initial_position",
      "dimensions": "2,type=float",
      "original_id": "agent2_initial_position"
    },
    {
      "id": "agent2_target_position",
      "dimensions": "2,type=float",
      "original_id": "agent2_target_position"
    },
    {
      "id": "agent3_id",
      "dimensions": "1,type=int",
      "original_id": "agent3_id"
    },
    {
      "id": "agent3_radius",
      "dimensions": "1,type=float",
      "original_id": "agent3_radius"
    },
    {
      "id": "agent3_initial_position",
      "dimensions": "2,type=float",
      "original_id": "agent3_initial_position"
    },
    {
      "id": "agent3_target_position",
      "dimensions": "2,type=float",
      "original_id": "agent3_target_position"
    },
    {
      "id": "agent4_id",
      "dimensions": "1,type=int",
      "original_id": "agent4_id"
    },
    {
      "id": "agent4_radius",
      "dimensions": "1,type=float",
      "original_id": "agent4_radius"
    },
    {
      "id": "agent4_initial_position",
      "dimensions": "2,type=float",
      "original_id": "agent4_initial_position"
    },
    {
      "id": "agent4_target_position",
      "dimensions": "2,type=float",
      "original_id": "agent4_target_position"
    },
    {
      "id": "experiment_seeds",
      "dimensions": "2,type=int",
      "original_id": "experiment_seeds"
    },
    {
      "id": "results_dir",
      "dimensions": "1,type=string",
      "original_id": "results_dir"
    },
    {
      "id": "animation_template",
      "dimensions": "1,type=string",
      "original_id": "animation_template"
    },
    {
      "id": "control_vis_filename",
      "dimensions": "1,type=string",
      "original_id": "control_vis_filename"
    },
    {
      "id": "obstacle_distance_filename",
      "dimensions": "1,type=string",
      "original_id": "obstacle_distance_filename"
    },
    {
      "id": "path_uncertainty_filename",
      "dimensions": "1,type=string",
      "original_id": "path_uncertainty_filename"
    },
    {
      "id": "convergence_filename",
      "dimensions": "1,type=string",
      "original_id": "convergence_filename"
    }
  ],
  "parameters": {},
  "initial_parameters": {},
  "observations": [],
  "transitions": [
    {
      "sources": [
        "dt"
      ],
      "operator": ">",
      "targets": [
        "A"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "A",
        "B",
        "C"
      ],
      "operator": ">",
      "targets": [
        "state_space_model"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "state_space_model",
        "initial_state_variance",
        "control_variance"
      ],
      "operator": ">",
      "targets": [
        "agent_trajectories"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "agent_trajectories",
        "goal_constraint_variance"
      ],
      "operator": ">",
      "targets": [
        "goal_directed_behavior"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "agent_trajectories",
        "gamma",
        "gamma_shape",
        "gamma_scale_factor"
      ],
      "operator": ">",
      "targets": [
        "obstacle_avoidance"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "agent_trajectories",
        "nr_agents"
      ],
      "operator": ">",
      "targets": [
        "collision_avoidance"
      ],
      "attributes": {}
    },
    {
      "sources": [
        "goal_directed_behavior",
        "obstacle_avoidance",
        "collision_avoidance"
      ],
      "operator": ">",
      "targets": [
        "planning_system"
      ],
      "attributes": {}
    }
  ],
  "ontology_annotations": {
    "dt": "TimeStep",
    "gamma": "ConstraintParameter",
    "nr_steps": "TrajectoryLength",
    "nr_iterations": "InferenceIterations",
    "nr_agents": "NumberOfAgents",
    "softmin_temperature": "SoftminTemperature",
    "A": "StateTransitionMatrix",
    "B": "ControlInputMatrix",
    "C": "ObservationMatrix",
    "initial_state_variance": "InitialStateVariance",
    "control_variance": "ControlVariance",
    "goal_constraint_variance": "GoalConstraintVariance"
  },
  "equations_text": "",
  "time_info": {
    "ModelTimeHorizon": "nr_steps"
  },
  "footer_text": "",
  "signature": {
    "Creator": "AI Assistant for GNN",
    "Date": "2024-07-27",
    "Status": "Example for RxInfer.jl multi-agent trajectory planning"
  },
  "raw_sections": {
    "GNNSection": "RxInferMultiAgentTrajectoryPlanning",
    "GNNVersionAndFlags": "GNN v1",
    "ModelName": "Multi-agent Trajectory Planning",
    "ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
    "StateSpaceBlock": "# Model parameters\ndt[1,type=float]               # Time step for the state space model\ngamma[1,type=float]            # Constraint parameter for the Halfspace node\nnr_steps[1,type=int]           # Number of time steps in the trajectory\nnr_iterations[1,type=int]      # Number of inference iterations\nnr_agents[1,type=int]          # Number of agents in the simulation\nsoftmin_temperature[1,type=float] # Temperature parameter for the softmin function\nintermediate_steps[1,type=int] # Intermediate results saving interval\nsave_intermediates[1,type=bool] # Whether to save intermediate results\n\n# State space matrices\nA[4,4,type=float]              # State transition matrix\nB[4,2,type=float]              # Control input matrix\nC[2,4,type=float]              # Observation matrix\n\n# Prior distributions\ninitial_state_variance[1,type=float]    # Prior on initial state\ncontrol_variance[1,type=float]          # Prior on control inputs\ngoal_constraint_variance[1,type=float]  # Goal constraints variance\ngamma_shape[1,type=float]               # Parameters for GammaShapeRate prior\ngamma_scale_factor[1,type=float]        # Parameters for GammaShapeRate prior\n\n# Visualization parameters\nx_limits[2,type=float]            # Plot boundaries (x-axis)\ny_limits[2,type=float]            # Plot boundaries (y-axis)\nfps[1,type=int]                   # Animation frames per second\nheatmap_resolution[1,type=int]    # Heatmap resolution\nplot_width[1,type=int]            # Plot width\nplot_height[1,type=int]           # Plot height\nagent_alpha[1,type=float]         # Visualization alpha for agents\ntarget_alpha[1,type=float]        # Visualization alpha for targets\ncolor_palette[1,type=string]      # Color palette for visualization\n\n# Environment definitions\ndoor_obstacle_center_1[2,type=float]    # Door environment, obstacle 1 center\ndoor_obstacle_size_1[2,type=float]      # Door environment, obstacle 1 size\ndoor_obstacle_center_2[2,type=float]    # Door environment, obstacle 2 center\ndoor_obstacle_size_2[2,type=float]      # Door environment, obstacle 2 size\n\nwall_obstacle_center[2,type=float]      # Wall environment, obstacle center\nwall_obstacle_size[2,type=float]        # Wall environment, obstacle size\n\ncombined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center\ncombined_obstacle_size_1[2,type=float]   # Combined environment, obstacle 1 size\ncombined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center\ncombined_obstacle_size_2[2,type=float]   # Combined environment, obstacle 2 size\ncombined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center\ncombined_obstacle_size_3[2,type=float]   # Combined environment, obstacle 3 size\n\n# Agent configurations\nagent1_id[1,type=int]                   # Agent 1 ID\nagent1_radius[1,type=float]             # Agent 1 radius\nagent1_initial_position[2,type=float]   # Agent 1 initial position\nagent1_target_position[2,type=float]    # Agent 1 target position\n\nagent2_id[1,type=int]                   # Agent 2 ID\nagent2_radius[1,type=float]             # Agent 2 radius\nagent2_initial_position[2,type=float]   # Agent 2 initial position\nagent2_target_position[2,type=float]    # Agent 2 target position\n\nagent3_id[1,type=int]                   # Agent 3 ID\nagent3_radius[1,type=float]             # Agent 3 radius\nagent3_initial_position[2,type=float]   # Agent 3 initial position\nagent3_target_position[2,type=float]    # Agent 3 target 
position\n\nagent4_id[1,type=int]                   # Agent 4 ID\nagent4_radius[1,type=float]             # Agent 4 radius\nagent4_initial_position[2,type=float]   # Agent 4 initial position\nagent4_target_position[2,type=float]    # Agent 4 target position\n\n# Experiment configurations\nexperiment_seeds[2,type=int]            # Random seeds for reproducibility\nresults_dir[1,type=string]              # Base directory for results\nanimation_template[1,type=string]       # Filename template for animations\ncontrol_vis_filename[1,type=string]     # Filename for control visualization\nobstacle_distance_filename[1,type=string] # Filename for obstacle distance plot\npath_uncertainty_filename[1,type=string]  # Filename for path uncertainty plot\nconvergence_filename[1,type=string]       # Filename for convergence plot",
    "Connections": "# Model parameters\ndt > A\n(A, B, C) > state_space_model\n\n# Agent trajectories\n(state_space_model, initial_state_variance, control_variance) > agent_trajectories\n\n# Goal constraints\n(agent_trajectories, goal_constraint_variance) > goal_directed_behavior\n\n# Obstacle avoidance\n(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance\n\n# Collision avoidance\n(agent_trajectories, nr_agents) > collision_avoidance\n\n# Complete planning system\n(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system",
    "InitialParameterization": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
    "InitialParameterization_raw_content": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
    "Equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t,  w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t,                v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
    "Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
    "ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance",
    "ModelParameters": "nr_agents=4\nnr_steps=40\nnr_iterations=350",
    "Footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl",
    "Signature": "Creator: AI Assistant for GNN\nDate: 2024-07-27\nStatus: Example for RxInfer.jl multi-agent trajectory planning"
  },
  "other_sections": {},
  "gnnsection": {},
  "gnnversionandflags": {},
  "equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t,  w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t,                v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
  "ModelParameters": {},
  "footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl"
}
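The A, B, and C matrices in InitialParameterization encode a 2D constant-velocity state-space model. A small NumPy sketch (illustrative only; the reference implementation targets RxInfer.jl) rolls the deterministic part of x_{t+1} = A x_t + B u_t forward:

```python
import numpy as np

dt = 1.0  # from InitialParameterization

# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]: position/velocity dynamics in x and y
A = np.array([[1.0, dt, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, dt],
              [0.0, 0.0, 0.0, 1.0]])
# B = [0 0; dt 0; 0 0; 0 dt]: controls act on the velocity components
B = np.array([[0.0, 0.0], [dt, 0.0], [0.0, 0.0], [0.0, dt]])
# C = [1 0 0 0; 0 0 1 0]: only the two positions are observed
C = np.array([[1.0, 0.0, 0.0, 0.0], [0.0, 0.0, 1.0, 0.0]])

# Noise-free rollout of x_{t+1} = A x_t + B u_t, y_t = C x_t.
x = np.array([-4.0, 0.0, 10.0, 0.0])  # agent1_initial_position (-4, 10), zero velocity
u = np.array([0.1, -0.1])             # illustrative constant control input
for _ in range(3):
    x = A @ x + B @ u
    print(C @ x)                      # observed (x, y) position
```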

Text/Log Files

rxinfer_multiagent_gnn.txt

GNN Model Summary: Multi-agent Trajectory Planning
Source File: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn/examples/rxinfer_multiagent_gnn.md

Metadata:
  description: This model represents a multi-agent trajectory planning scenario in RxInfer.jl.
It includes:
- State space model for agents moving in a 2D environment
- Obstacle avoidance constraints
- Goal-directed behavior
- Inter-agent collision avoidance
The model can be used to simulate trajectory planning in various environments with obstacles.

States (60):
  - ID: dt (dimensions=1,type=float, original_id=dt)
  - ID: gamma (dimensions=1,type=float, original_id=gamma)
  - ID: nr_steps (dimensions=1,type=int, original_id=nr_steps)
  - ID: nr_iterations (dimensions=1,type=int, original_id=nr_iterations)
  - ID: nr_agents (dimensions=1,type=int, original_id=nr_agents)
  - ID: softmin_temperature (dimensions=1,type=float, original_id=softmin_temperature)
  - ID: intermediate_steps (dimensions=1,type=int, original_id=intermediate_steps)
  - ID: save_intermediates (dimensions=1,type=bool, original_id=save_intermediates)
  - ID: A (dimensions=4,4,type=float, original_id=A)
  - ID: B (dimensions=4,2,type=float, original_id=B)
  - ID: C (dimensions=2,4,type=float, original_id=C)
  - ID: initial_state_variance (dimensions=1,type=float, original_id=initial_state_variance)
  - ID: control_variance (dimensions=1,type=float, original_id=control_variance)
  - ID: goal_constraint_variance (dimensions=1,type=float, original_id=goal_constraint_variance)
  - ID: gamma_shape (dimensions=1,type=float, original_id=gamma_shape)
  - ID: gamma_scale_factor (dimensions=1,type=float, original_id=gamma_scale_factor)
  - ID: x_limits (dimensions=2,type=float, original_id=x_limits)
  - ID: y_limits (dimensions=2,type=float, original_id=y_limits)
  - ID: fps (dimensions=1,type=int, original_id=fps)
  - ID: heatmap_resolution (dimensions=1,type=int, original_id=heatmap_resolution)
  - ID: plot_width (dimensions=1,type=int, original_id=plot_width)
  - ID: plot_height (dimensions=1,type=int, original_id=plot_height)
  - ID: agent_alpha (dimensions=1,type=float, original_id=agent_alpha)
  - ID: target_alpha (dimensions=1,type=float, original_id=target_alpha)
  - ID: color_palette (dimensions=1,type=string, original_id=color_palette)
  - ID: door_obstacle_center_1 (dimensions=2,type=float, original_id=door_obstacle_center_1)
  - ID: door_obstacle_size_1 (dimensions=2,type=float, original_id=door_obstacle_size_1)
  - ID: door_obstacle_center_2 (dimensions=2,type=float, original_id=door_obstacle_center_2)
  - ID: door_obstacle_size_2 (dimensions=2,type=float, original_id=door_obstacle_size_2)
  - ID: wall_obstacle_center (dimensions=2,type=float, original_id=wall_obstacle_center)
  - ID: wall_obstacle_size (dimensions=2,type=float, original_id=wall_obstacle_size)
  - ID: combined_obstacle_center_1 (dimensions=2,type=float, original_id=combined_obstacle_center_1)
  - ID: combined_obstacle_size_1 (dimensions=2,type=float, original_id=combined_obstacle_size_1)
  - ID: combined_obstacle_center_2 (dimensions=2,type=float, original_id=combined_obstacle_center_2)
  - ID: combined_obstacle_size_2 (dimensions=2,type=float, original_id=combined_obstacle_size_2)
  - ID: combined_obstacle_center_3 (dimensions=2,type=float, original_id=combined_obstacle_center_3)
  - ID: combined_obstacle_size_3 (dimensions=2,type=float, original_id=combined_obstacle_size_3)
  - ID: agent1_id (dimensions=1,type=int, original_id=agent1_id)
  - ID: agent1_radius (dimensions=1,type=float, original_id=agent1_radius)
  - ID: agent1_initial_position (dimensions=2,type=float, original_id=agent1_initial_position)
  - ID: agent1_target_position (dimensions=2,type=float, original_id=agent1_target_position)
  - ID: agent2_id (dimensions=1,type=int, original_id=agent2_id)
  - ID: agent2_radius (dimensions=1,type=float, original_id=agent2_radius)
  - ID: agent2_initial_position (dimensions=2,type=float, original_id=agent2_initial_position)
  - ID: agent2_target_position (dimensions=2,type=float, original_id=agent2_target_position)
  - ID: agent3_id (dimensions=1,type=int, original_id=agent3_id)
  - ID: agent3_radius (dimensions=1,type=float, original_id=agent3_radius)
  - ID: agent3_initial_position (dimensions=2,type=float, original_id=agent3_initial_position)
  - ID: agent3_target_position (dimensions=2,type=float, original_id=agent3_target_position)
  - ID: agent4_id (dimensions=1,type=int, original_id=agent4_id)
  - ID: agent4_radius (dimensions=1,type=float, original_id=agent4_radius)
  - ID: agent4_initial_position (dimensions=2,type=float, original_id=agent4_initial_position)
  - ID: agent4_target_position (dimensions=2,type=float, original_id=agent4_target_position)
  - ID: experiment_seeds (dimensions=2,type=int, original_id=experiment_seeds)
  - ID: results_dir (dimensions=1,type=string, original_id=results_dir)
  - ID: animation_template (dimensions=1,type=string, original_id=animation_template)
  - ID: control_vis_filename (dimensions=1,type=string, original_id=control_vis_filename)
  - ID: obstacle_distance_filename (dimensions=1,type=string, original_id=obstacle_distance_filename)
  - ID: path_uncertainty_filename (dimensions=1,type=string, original_id=path_uncertainty_filename)
  - ID: convergence_filename (dimensions=1,type=string, original_id=convergence_filename)

Initial Parameters (0):

General Parameters (0):

Observations (0):

Transitions (7):
  - None -> None
  - None -> None
  - None -> None
  - None -> None
  - None -> None
  - None -> None
  - None -> None

Ontology Annotations (12):
  dt = TimeStep
  gamma = ConstraintParameter
  nr_steps = TrajectoryLength
  nr_iterations = InferenceIterations
  nr_agents = NumberOfAgents
  softmin_temperature = SoftminTemperature
  A = StateTransitionMatrix
  B = ControlInputMatrix
  C = ObservationMatrix
  initial_state_variance = InitialStateVariance

... (file truncated, total lines: 103)

Other Files

GNN Processing Summary (Overall File List)

📊 GNN Processing Summary

🗓️ Generated: 2025-06-06 13:08:09

⚙️ Processing Configuration

📁 GNN Files Discovered

Found 2 GNN files for processing:

  • src/gnn/examples/pymdp_pomdp_agent.md
  • src/gnn/examples/rxinfer_multiagent_gnn.md

🔄 Pipeline Execution Status

Pipeline execution data not available.

📊 Output Summary

🔍 Key Findings

📋 Recommendations

General Improvements


Report generated by GNN Processing Pipeline Step 5 (Export)

GNN Visualizations (Step 6)

Visualizations for pymdp_pomdp_agent

Images

Markdown Reports

file_content.md

GNN File: src/gnn/examples/pymdp_pomdp_agent.md

Raw File Content

GNN Example: Multifactor PyMDP Agent

Format: Markdown representation of a Multifactor PyMDP model in Active Inference format

Version: 1.0

This file is machine-readable and attempts to represent a PyMDP agent with multiple observation modalities and hidden state factors.

GNNSection

MultifactorPyMDPAgent

GNNVersionAndFlags

GNN v1

ModelName

Multifactor PyMDP Agent v1

ModelAnnotation

This model represents a PyMDP agent with multiple observation modalities and hidden state factors.
- Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes)
- Hidden state factors: "reward_level" (2 states), "decision_state" (3 states)
- Control: "decision_state" factor is controllable with 3 possible actions.
The parameterization is derived from a PyMDP Python script example.

StateSpaceBlock

A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]

A_m0[3,2,3,type=float]   # Likelihood for modality 0 ("state_observation")
A_m1[3,2,3,type=float]   # Likelihood for modality 1 ("reward")
A_m2[3,2,3,type=float]   # Likelihood for modality 2 ("decision_proprioceptive")

B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]

B_f0[2,2,1,type=float] # Transitions for factor 0 ("reward_level"), 1 implicit action (uncontrolled) B_f1[3,3,3,type=float] # Transitions for factor 1 ("decision_state"), 3 actions

C_vectors are defined per modality: C_m[observation_outcomes]

C_m0[3,type=float] # Preferences for modality 0 C_m1[3,type=float] # Preferences for modality 1 C_m2[3,type=float] # Preferences for modality 2

D_vectors are defined per hidden state factor: D_f[states]

D_f0[2,type=float] # Prior for factor 0 D_f1[3,type=float] # Prior for factor 1

Hidden States

s_f0[2,1,type=float] # Hidden state for factor 0 ("reward_level") s_f1[3,1,type=float] # Hidden state for factor 1 ("decision_state") s_prime_f0[2,1,type=float] # Next hidden state for factor 0 s_prime_f1[3,1,type=float] # Next hidden state for factor 1

Observations

o_m0[3,1,type=float] # Observation for modality 0 o_m1[3,1,type=float] # Observation for modality 1 o_m2[3,1,type=float] # Observation for modality 2

Policy and Control

π_f1[3,type=float] # Policy (distribution over actions) for controllable factor 1 u_f1[1,type=int] # Action taken for controllable factor 1 G[1,type=float] # Expected Free Energy (overall, or can be per policy) t[1,type=int] # Time step

Connections

(D_f0,D_f1)-(s_f0,s_f1) (s_f0,s_f1)-(A_m0,A_m1,A_m2) (A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2) (s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled (B_f0,B_f1)-(s_prime_f0,s_prime_f1) (C_m0,C_m1,C_m2)>G G>π_f1 π_f1-u_f1 G=ExpectedFreeEnergy t=Time

InitialParameterization

A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]

A[0][:, :, 0] = np.ones((3,2))/3

A[0][:, :, 1] = np.ones((3,2))/3

A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)

A_m0={ ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ), # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1) ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ), # obs=1 ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) ) # obs=2 }

A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3

A[1][2, :, 0] = [1.0,1.0]

A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]

A[1][2, :, 2] = [1.0,1.0]

Others are 0.

A_m1={ ( (0.0,0.731,0.0), (0.0,0.269,0.0) ), # obs=0 ( (0.0,0.269,0.0), (0.0,0.731,0.0) ), # obs=1 ( (1.0,0.0,1.0), (1.0,0.0,1.0) ) # obs=2 }

A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3

A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0

Others are 0.

A_m2={ ( (1.0,0.0,0.0), (1.0,0.0,0.0) ), # obs=0 ( (0.0,1.0,0.0), (0.0,1.0,0.0) ), # obs=1 ( (0.0,0.0,1.0), (0.0,0.0,1.0) ) # obs=2 }

B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]

B_f0 = eye(2)

B_f0={ ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0) ( (0.0),(1.0) ) # s_next=1 }

B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]

B_f1[:,:,action_idx] = eye(3) for each action

B_f1={ ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ... ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1 ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) ) # s_next=2 }

C_m0: num_obs[0]=3. Defaults to zeros.

C_m0={(0.0,0.0,0.0)}

C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0

C_m1={(1.0,-2.0,0.0)}

C_m2: num_obs[2]=3. Defaults to zeros.

C_m2={(0.0,0.0,0.0)}

D_f0: factor 0 (2 states). Uniform prior.

D_f0={(0.5,0.5)}

D_f1: factor 1 (3 states). Uniform prior.

D_f1={(0.33333,0.33333,0.33333)}

Equations

Standard PyMDP agent equations for state inference (infer_states),

policy inference (infer_policies), and action sampling (sample_action).

qs = infer_states(o)

q_pi, efe = infer_policies()

action = sample_action()

Time

Dynamic DiscreteTime=t ModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.

ActInfOntologyAnnotation

A_m0=LikelihoodMatrixModality0 A_m1=LikelihoodMatrixModality1 A_m2=LikelihoodMatrixModality2 B_f0=TransitionMatrixFactor0 B_f1=TransitionMatrixFactor1 C_m0=LogPreferenceVectorModality0 C_m1=LogPreferenceVectorModality1 C_m2=LogPreferenceVectorModality2 D_f0=PriorOverHiddenStatesFactor0 D_f1=PriorOverHiddenStatesFactor1 s_f0=HiddenStateFactor0 s_f1=HiddenStateFactor1 s_prime_f0=NextHiddenStateFactor0 s_prime_f1=NextHiddenStateFactor1 o_m0=ObservationModality0 o_m1=ObservationModality1 o_m2=ObservationModality2 π_f1=PolicyVectorFactor1 # Distribution over actions for factor 1 u_f1=ActionFactor1 # Chosen action for factor 1 G=ExpectedFreeEnergy

ModelParameters

num_hidden_states_factors: [2, 3] # s_f0[2], s_f1[3] num_obs_modalities: [3, 3, 3] # o_m0[3], o_m1[3], o_m2[3] num_control_factors: [1, 3] # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)

Footer

Multifactor PyMDP Agent v1 - GNN Representation

Signature

NA

Parsed Sections

_HeaderComments

# GNN Example: Multifactor PyMDP Agent
# Format: Markdown representation of a Multifactor PyMDP model in Active Inference format
# Version: 1.0
# This file is machine-readable and attempts to represent a PyMDP agent with multiple observation modalities and hidden state factors.

ModelName

Multifactor PyMDP Agent v1

GNNSection

MultifactorPyMDPAgent

GNNVersionAndFlags

GNN v1

ModelAnnotation

This model represents a PyMDP agent with multiple observation modalities and hidden state factors.
- Observation modalities: "state_observation" (3 outcomes), "reward" (3 outcomes), "decision_proprioceptive" (3 outcomes)
- Hidden state factors: "reward_level" (2 states), "decision_state" (3 states)
- Control: "decision_state" factor is controllable with 3 possible actions.
The parameterization is derived from a PyMDP Python script example.

StateSpaceBlock

# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]
A_m0[3,2,3,type=float]   # Likelihood for modality 0 ("state_observation")
A_m1[3,2,3,type=float]   # Likelihood for modality 1 ("reward")
A_m2[3,2,3,type=float]   # Likelihood for modality 2 ("decision_proprioceptive")

# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]
B_f0[2,2,1,type=float]   # Transitions for factor 0 ("reward_level"), 1 implicit action (uncontrolled)
B_f1[3,3,3,type=float]   # Transitions for factor 1 ("decision_state"), 3 actions

# C_vectors are defined per modality: C_m[observation_outcomes]
C_m0[3,type=float]       # Preferences for modality 0
C_m1[3,type=float]       # Preferences for modality 1
C_m2[3,type=float]       # Preferences for modality 2

# D_vectors are defined per hidden state factor: D_f[states]
D_f0[2,type=float]       # Prior for factor 0
D_f1[3,type=float]       # Prior for factor 1

# Hidden States
s_f0[2,1,type=float]     # Hidden state for factor 0 ("reward_level")
s_f1[3,1,type=float]     # Hidden state for factor 1 ("decision_state")
s_prime_f0[2,1,type=float] # Next hidden state for factor 0
s_prime_f1[3,1,type=float] # Next hidden state for factor 1

# Observations
o_m0[3,1,type=float]     # Observation for modality 0
o_m1[3,1,type=float]     # Observation for modality 1
o_m2[3,1,type=float]     # Observation for modality 2

# Policy and Control
π_f1[3,type=float]       # Policy (distribution over actions) for controllable factor 1
u_f1[1,type=int]         # Action taken for controllable factor 1
G[1,type=float]          # Expected Free Energy (overall, or can be per policy)
t[1,type=int]            # Time step

Connections

(D_f0,D_f1)-(s_f0,s_f1)
(s_f0,s_f1)-(A_m0,A_m1,A_m2)
(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)
(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled
(B_f0,B_f1)-(s_prime_f0,s_prime_f1)
(C_m0,C_m1,C_m2)>G
G>π_f1
π_f1-u_f1
G=ExpectedFreeEnergy
t=Time
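
The connection syntax above uses ">" for directed influence, "-" for undirected association, and "=" for an ontology-style equivalence, with parenthesized groups expanding to all pairwise edges. As a hypothetical helper (not part of the pipeline), such lines could be expanded into edge tuples roughly like so:

```python
import itertools
import re

def expand(side: str) -> list[str]:
    """'(a,b,c)' or 'a' -> list of identifiers."""
    return [s.strip() for s in side.strip("()").split(",")]

def parse_connection(line: str) -> list[tuple[str, str, str]]:
    """Expand one GNN connection line into (lhs, rhs, kind) tuples."""
    line = line.split("#", 1)[0].strip()  # drop trailing comments
    m = re.match(r"(\(.*?\)|\S+?)\s*([->=])\s*(\(.*?\)|\S+)$", line)
    if not m:
        return []
    lhs, op, rhs = m.groups()
    kind = {"-": "undirected", ">": "directed", "=": "equivalence"}[op]
    return [(a, b, kind) for a, b in itertools.product(expand(lhs), expand(rhs))]

parse_connection("(C_m0,C_m1,C_m2)>G")
# [('C_m0', 'G', 'directed'), ('C_m1', 'G', 'directed'), ('C_m2', 'G', 'directed')]
```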

InitialParameterization

# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]
# A[0][:, :, 0] = np.ones((3,2))/3
# A[0][:, :, 1] = np.ones((3,2))/3
# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)
A_m0={
  ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ),  # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)
  ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ),  # obs=1
  ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) )   # obs=2
}

# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3
# A[1][2, :, 0] = [1.0,1.0]
# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]
# A[1][2, :, 2] = [1.0,1.0]
# Others are 0.
A_m1={
  ( (0.0,0.731,0.0), (0.0,0.269,0.0) ),  # obs=0
  ( (0.0,0.269,0.0), (0.0,0.731,0.0) ),  # obs=1
  ( (1.0,0.0,1.0), (1.0,0.0,1.0) )      # obs=2
}

# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3
# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0
# Others are 0.
A_m2={
  ( (1.0,0.0,0.0), (1.0,0.0,0.0) ),  # obs=0
  ( (0.0,1.0,0.0), (0.0,1.0,0.0) ),  # obs=1
  ( (0.0,0.0,1.0), (0.0,0.0,1.0) )   # obs=2
}

# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]
# B_f0 = eye(2)
B_f0={
  ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)
  ( (0.0),(1.0) )  # s_next=1
}

# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]
# B_f1[:,:,action_idx] = eye(3) for each action
B_f1={
  ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...
  ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1
  ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) )  # s_next=2
}

# C_m0: num_obs[0]=3. Defaults to zeros.
C_m0={(0.0,0.0,0.0)}

# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0
C_m1={(1.0,-2.0,0.0)}

# C_m2: num_obs[2]=3. Defaults to zeros.
C_m2={(0.0,0.0,0.0)}

# D_f0: factor 0 (2 states). Uniform prior.
D_f0={(0.5,0.5)}

# D_f1: factor 1 (3 states). Uniform prior.
D_f1={(0.33333,0.33333,0.33333)}

Equations

# Standard PyMDP agent equations for state inference (infer_states),
# policy inference (infer_policies), and action sampling (sample_action).
# qs = infer_states(o)
# q_pi, efe = infer_policies()
# action = sample_action()
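
This perception-action loop maps directly onto the inferactively-pymdp (`pymdp`) API. Below is a minimal sketch, assuming that package, of how the specification in this file could be instantiated and stepped; the observation indices at the end are placeholders:

```python
import numpy as np
from pymdp import utils
from pymdp.agent import Agent

num_obs = [3, 3, 3]   # o_m0, o_m1, o_m2
num_states = [2, 3]   # s_f0 (reward_level), s_f1 (decision_state)

# A: likelihoods per modality, indexed A[m][obs, s_f0, s_f1]
A = utils.obj_array_zeros([[o] + num_states for o in num_obs])
A[0][:, :, 0] = np.ones((3, 2)) / 3
A[0][:, :, 1] = np.ones((3, 2)) / 3
A[0][:, :, 2] = np.array([[0.8, 0.2], [0.0, 0.0], [0.2, 0.8]])
A[1] = np.array([[[0.0, 0.731, 0.0], [0.0, 0.269, 0.0]],
                 [[0.0, 0.269, 0.0], [0.0, 0.731, 0.0]],
                 [[1.0, 0.0, 1.0], [1.0, 0.0, 1.0]]])
A[2] = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
                 [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]],
                 [[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])

# B: transitions per factor, indexed B[f][s_next, s_prev, action]
B = utils.obj_array(2)
B[0] = np.eye(2)[:, :, None]              # factor 0: uncontrolled, 1 implicit action
B[1] = np.stack([np.eye(3)] * 3, axis=2)  # factor 1: identity dynamics per action

# C: log-preferences per modality; D: priors per factor
C = utils.obj_array_zeros(num_obs)
C[1] = np.array([1.0, -2.0, 0.0])
D = utils.obj_array(2)
D[0] = np.array([0.5, 0.5])
D[1] = np.ones(3) / 3

agent = Agent(A=A, B=B, C=C, D=D)

obs = [0, 2, 1]                      # placeholder observation, one index per modality
qs = agent.infer_states(obs)         # posterior over hidden states
q_pi, efe = agent.infer_policies()   # posterior over policies and expected free energy
action = agent.sample_action()       # one control per factor
```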

Time

Dynamic
DiscreteTime=t
ModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.

ActInfOntologyAnnotation

A_m0=LikelihoodMatrixModality0
A_m1=LikelihoodMatrixModality1
A_m2=LikelihoodMatrixModality2
B_f0=TransitionMatrixFactor0
B_f1=TransitionMatrixFactor1
C_m0=LogPreferenceVectorModality0
C_m1=LogPreferenceVectorModality1
C_m2=LogPreferenceVectorModality2
D_f0=PriorOverHiddenStatesFactor0
D_f1=PriorOverHiddenStatesFactor1
s_f0=HiddenStateFactor0
s_f1=HiddenStateFactor1
s_prime_f0=NextHiddenStateFactor0
s_prime_f1=NextHiddenStateFactor1
o_m0=ObservationModality0
o_m1=ObservationModality1
o_m2=ObservationModality2
π_f1=PolicyVectorFactor1 # Distribution over actions for factor 1
u_f1=ActionFactor1       # Chosen action for factor 1
G=ExpectedFreeEnergy

ModelParameters

num_hidden_states_factors: [2, 3]  # s_f0[2], s_f1[3]
num_obs_modalities: [3, 3, 3]     # o_m0[3], o_m1[3], o_m2[3]
num_control_factors: [1, 3]   # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)

Footer

Multifactor PyMDP Agent v1 - GNN Representation

Signature

NA

JSON Files

full_model_data.json

{
  "_HeaderComments": "# GNN Example: Multifactor PyMDP Agent\n# Format: Markdown representation of a Multifactor PyMDP model in Active Inference format\n# Version: 1.0\n# This file is machine-readable and attempts to represent a PyMDP agent with multiple observation modalities and hidden state factors.",
  "ModelName": "Multifactor PyMDP Agent v1",
  "GNNSection": "MultifactorPyMDPAgent",
  "GNNVersionAndFlags": "GNN v1",
  "ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
  "StateSpaceBlock": "# A_matrices are defined per modality: A_m[observation_outcomes, state_factor0_states, state_factor1_states]\nA_m0[3,2,3,type=float]   # Likelihood for modality 0 (\"state_observation\")\nA_m1[3,2,3,type=float]   # Likelihood for modality 1 (\"reward\")\nA_m2[3,2,3,type=float]   # Likelihood for modality 2 (\"decision_proprioceptive\")\n\n# B_matrices are defined per hidden state factor: B_f[states_next, states_previous, actions]\nB_f0[2,2,1,type=float]   # Transitions for factor 0 (\"reward_level\"), 1 implicit action (uncontrolled)\nB_f1[3,3,3,type=float]   # Transitions for factor 1 (\"decision_state\"), 3 actions\n\n# C_vectors are defined per modality: C_m[observation_outcomes]\nC_m0[3,type=float]       # Preferences for modality 0\nC_m1[3,type=float]       # Preferences for modality 1\nC_m2[3,type=float]       # Preferences for modality 2\n\n# D_vectors are defined per hidden state factor: D_f[states]\nD_f0[2,type=float]       # Prior for factor 0\nD_f1[3,type=float]       # Prior for factor 1\n\n# Hidden States\ns_f0[2,1,type=float]     # Hidden state for factor 0 (\"reward_level\")\ns_f1[3,1,type=float]     # Hidden state for factor 1 (\"decision_state\")\ns_prime_f0[2,1,type=float] # Next hidden state for factor 0\ns_prime_f1[3,1,type=float] # Next hidden state for factor 1\n\n# Observations\no_m0[3,1,type=float]     # Observation for modality 0\no_m1[3,1,type=float]     # Observation for modality 1\no_m2[3,1,type=float]     # Observation for modality 2\n\n# Policy and Control\n\u03c0_f1[3,type=float]       # Policy (distribution over actions) for controllable factor 1\nu_f1[1,type=int]         # Action taken for controllable factor 1\nG[1,type=float]          # Expected Free Energy (overall, or can be per policy)\nt[1,type=int]            # Time step",
  "Connections": "(D_f0,D_f1)-(s_f0,s_f1)\n(s_f0,s_f1)-(A_m0,A_m1,A_m2)\n(A_m0,A_m1,A_m2)-(o_m0,o_m1,o_m2)\n(s_f0,s_f1,u_f1)-(B_f0,B_f1) # u_f1 primarily affects B_f1; B_f0 is uncontrolled\n(B_f0,B_f1)-(s_prime_f0,s_prime_f1)\n(C_m0,C_m1,C_m2)>G\nG>\u03c0_f1\n\u03c0_f1-u_f1\nG=ExpectedFreeEnergy\nt=Time",
  "InitialParameterization": "# A_m0: num_obs[0]=3, num_states[0]=2, num_states[1]=3. Format: A[obs_idx][state_f0_idx][state_f1_idx]\n# A[0][:, :, 0] = np.ones((3,2))/3\n# A[0][:, :, 1] = np.ones((3,2))/3\n# A[0][:, :, 2] = [[0.8,0.2],[0.0,0.0],[0.2,0.8]] (obs x state_f0 for state_f1=2)\nA_m0={\n  ( (0.33333,0.33333,0.8), (0.33333,0.33333,0.2) ),  # obs=0; (vals for s_f1 over s_f0=0), (vals for s_f1 over s_f0=1)\n  ( (0.33333,0.33333,0.0), (0.33333,0.33333,0.0) ),  # obs=1\n  ( (0.33333,0.33333,0.2), (0.33333,0.33333,0.8) )   # obs=2\n}\n\n# A_m1: num_obs[1]=3, num_states[0]=2, num_states[1]=3\n# A[1][2, :, 0] = [1.0,1.0]\n# A[1][0:2, :, 1] = softmax([[1,0],[0,1]]) approx [[0.731,0.269],[0.269,0.731]]\n# A[1][2, :, 2] = [1.0,1.0]\n# Others are 0.\nA_m1={\n  ( (0.0,0.731,0.0), (0.0,0.269,0.0) ),  # obs=0\n  ( (0.0,0.269,0.0), (0.0,0.731,0.0) ),  # obs=1\n  ( (1.0,0.0,1.0), (1.0,0.0,1.0) )      # obs=2\n}\n\n# A_m2: num_obs[2]=3, num_states[0]=2, num_states[1]=3\n# A[2][0,:,0]=1.0; A[2][1,:,1]=1.0; A[2][2,:,2]=1.0\n# Others are 0.\nA_m2={\n  ( (1.0,0.0,0.0), (1.0,0.0,0.0) ),  # obs=0\n  ( (0.0,1.0,0.0), (0.0,1.0,0.0) ),  # obs=1\n  ( (0.0,0.0,1.0), (0.0,0.0,1.0) )   # obs=2\n}\n\n# B_f0: factor 0 (2 states), uncontrolled (1 action). Format B[s_next, s_prev, action=0]\n# B_f0 = eye(2)\nB_f0={\n  ( (1.0),(0.0) ), # s_next=0; (vals for s_prev over action=0)\n  ( (0.0),(1.0) )  # s_next=1\n}\n\n# B_f1: factor 1 (3 states), 3 actions. Format B[s_next, s_prev, action_idx]\n# B_f1[:,:,action_idx] = eye(3) for each action\nB_f1={\n  ( (1.0,1.0,1.0), (0.0,0.0,0.0), (0.0,0.0,0.0) ), # s_next=0; (vals for actions over s_prev=0), (vals for actions over s_prev=1), ...\n  ( (0.0,0.0,0.0), (1.0,1.0,1.0), (0.0,0.0,0.0) ), # s_next=1\n  ( (0.0,0.0,0.0), (0.0,0.0,0.0), (1.0,1.0,1.0) )  # s_next=2\n}\n\n# C_m0: num_obs[0]=3. Defaults to zeros.\nC_m0={(0.0,0.0,0.0)}\n\n# C_m1: num_obs[1]=3. C[1][0]=1.0, C[1][1]=-2.0\nC_m1={(1.0,-2.0,0.0)}\n\n# C_m2: num_obs[2]=3. Defaults to zeros.\nC_m2={(0.0,0.0,0.0)}\n\n# D_f0: factor 0 (2 states). Uniform prior.\nD_f0={(0.5,0.5)}\n\n# D_f1: factor 1 (3 states). Uniform prior.\nD_f1={(0.33333,0.33333,0.33333)}",
  "Equations": "# Standard PyMDP agent equations for state inference (infer_states),\n# policy inference (infer_policies), and action sampling (sample_action).\n# qs = infer_states(o)\n# q_pi, efe = infer_policies()\n# action = sample_action()",
  "Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
  "ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1       # Chosen action for factor 1\nG=ExpectedFreeEnergy",
  "ModelParameters": "num_hidden_states_factors: [2, 3]  # s_f0[2], s_f1[3]\nnum_obs_modalities: [3, 3, 3]     # o_m0[3], o_m1[3], o_m2[3]\nnum_control_factors: [1, 3]   # B_f0 actions_dim=1 (uncontrolled), B_f1 actions_dim=3 (controlled by pi_f1)",
  "Footer": "Multifactor PyMDP Agent v1 - GNN Representation",
  "Signature": "NA"
}

model_metadata.json

{
  "ModelName": "Multifactor PyMDP Agent v1",
  "ModelAnnotation": "This model represents a PyMDP agent with multiple observation modalities and hidden state factors.\n- Observation modalities: \"state_observation\" (3 outcomes), \"reward\" (3 outcomes), \"decision_proprioceptive\" (3 outcomes)\n- Hidden state factors: \"reward_level\" (2 states), \"decision_state\" (3 states)\n- Control: \"decision_state\" factor is controllable with 3 possible actions.\nThe parameterization is derived from a PyMDP Python script example.",
  "GNNVersionAndFlags": "GNN v1",
  "Time": "Dynamic\nDiscreteTime=t\nModelTimeHorizon=Unbounded # Agent definition is generally unbounded, specific simulation runs have a horizon.",
  "ActInfOntologyAnnotation": "A_m0=LikelihoodMatrixModality0\nA_m1=LikelihoodMatrixModality1\nA_m2=LikelihoodMatrixModality2\nB_f0=TransitionMatrixFactor0\nB_f1=TransitionMatrixFactor1\nC_m0=LogPreferenceVectorModality0\nC_m1=LogPreferenceVectorModality1\nC_m2=LogPreferenceVectorModality2\nD_f0=PriorOverHiddenStatesFactor0\nD_f1=PriorOverHiddenStatesFactor1\ns_f0=HiddenStateFactor0\ns_f1=HiddenStateFactor1\ns_prime_f0=NextHiddenStateFactor0\ns_prime_f1=NextHiddenStateFactor1\no_m0=ObservationModality0\no_m1=ObservationModality1\no_m2=ObservationModality2\n\u03c0_f1=PolicyVectorFactor1 # Distribution over actions for factor 1\nu_f1=ActionFactor1       # Chosen action for factor 1\nG=ExpectedFreeEnergy"
}

Visualizations for rxinfer_multiagent_gnn: rxinfer_multiagent_gnn

Images

Markdown Reports

file_content.md

GNN File: src/gnn/examples/rxinfer_multiagent_gnn.md

Raw File Content

# GNN Example: RxInfer Multi-agent Trajectory Planning

Format: Markdown representation of a Multi-agent Trajectory Planning model for RxInfer.jl

Version: 1.0

This file is machine-readable and represents the configuration for the RxInfer.jl multi-agent trajectory planning example.

GNNSection

RxInferMultiAgentTrajectoryPlanning

GNNVersionAndFlags

GNN v1

ModelName

Multi-agent Trajectory Planning

ModelAnnotation

This model represents a multi-agent trajectory planning scenario in RxInfer.jl. It includes: - State space model for agents moving in a 2D environment - Obstacle avoidance constraints - Goal-directed behavior - Inter-agent collision avoidance The model can be used to simulate trajectory planning in various environments with obstacles.

StateSpaceBlock

Model parameters

dt[1,type=float] # Time step for the state space model gamma[1,type=float] # Constraint parameter for the Halfspace node nr_steps[1,type=int] # Number of time steps in the trajectory nr_iterations[1,type=int] # Number of inference iterations nr_agents[1,type=int] # Number of agents in the simulation softmin_temperature[1,type=float] # Temperature parameter for the softmin function intermediate_steps[1,type=int] # Intermediate results saving interval save_intermediates[1,type=bool] # Whether to save intermediate results

State space matrices

A[4,4,type=float] # State transition matrix B[4,2,type=float] # Control input matrix C[2,4,type=float] # Observation matrix

Prior distributions

initial_state_variance[1,type=float] # Prior on initial state control_variance[1,type=float] # Prior on control inputs goal_constraint_variance[1,type=float] # Goal constraints variance gamma_shape[1,type=float] # Parameters for GammaShapeRate prior gamma_scale_factor[1,type=float] # Parameters for GammaShapeRate prior

Visualization parameters

x_limits[2,type=float] # Plot boundaries (x-axis) y_limits[2,type=float] # Plot boundaries (y-axis) fps[1,type=int] # Animation frames per second heatmap_resolution[1,type=int] # Heatmap resolution plot_width[1,type=int] # Plot width plot_height[1,type=int] # Plot height agent_alpha[1,type=float] # Visualization alpha for agents target_alpha[1,type=float] # Visualization alpha for targets color_palette[1,type=string] # Color palette for visualization

Environment definitions

door_obstacle_center_1[2,type=float] # Door environment, obstacle 1 center door_obstacle_size_1[2,type=float] # Door environment, obstacle 1 size door_obstacle_center_2[2,type=float] # Door environment, obstacle 2 center door_obstacle_size_2[2,type=float] # Door environment, obstacle 2 size

wall_obstacle_center[2,type=float] # Wall environment, obstacle center wall_obstacle_size[2,type=float] # Wall environment, obstacle size

combined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center combined_obstacle_size_1[2,type=float] # Combined environment, obstacle 1 size combined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center combined_obstacle_size_2[2,type=float] # Combined environment, obstacle 2 size combined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center combined_obstacle_size_3[2,type=float] # Combined environment, obstacle 3 size

Agent configurations

agent1_id[1,type=int] # Agent 1 ID agent1_radius[1,type=float] # Agent 1 radius agent1_initial_position[2,type=float] # Agent 1 initial position agent1_target_position[2,type=float] # Agent 1 target position

agent2_id[1,type=int] # Agent 2 ID agent2_radius[1,type=float] # Agent 2 radius agent2_initial_position[2,type=float] # Agent 2 initial position agent2_target_position[2,type=float] # Agent 2 target position

agent3_id[1,type=int] # Agent 3 ID agent3_radius[1,type=float] # Agent 3 radius agent3_initial_position[2,type=float] # Agent 3 initial position agent3_target_position[2,type=float] # Agent 3 target position

agent4_id[1,type=int] # Agent 4 ID agent4_radius[1,type=float] # Agent 4 radius agent4_initial_position[2,type=float] # Agent 4 initial position agent4_target_position[2,type=float] # Agent 4 target position

Experiment configurations

experiment_seeds[2,type=int] # Random seeds for reproducibility results_dir[1,type=string] # Base directory for results animation_template[1,type=string] # Filename template for animations control_vis_filename[1,type=string] # Filename for control visualization obstacle_distance_filename[1,type=string] # Filename for obstacle distance plot path_uncertainty_filename[1,type=string] # Filename for path uncertainty plot convergence_filename[1,type=string] # Filename for convergence plot

Connections

Model parameters

dt > A (A, B, C) > state_space_model

Agent trajectories

(state_space_model, initial_state_variance, control_variance) > agent_trajectories

Goal constraints

(agent_trajectories, goal_constraint_variance) > goal_directed_behavior

Obstacle avoidance

(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance

Collision avoidance

(agent_trajectories, nr_agents) > collision_avoidance

Complete planning system

(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system

InitialParameterization

Model parameters

dt=1.0 gamma=1.0 nr_steps=40 nr_iterations=350 nr_agents=4 softmin_temperature=10.0 intermediate_steps=10 save_intermediates=false

State space matrices

A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]

A={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}

B = [0 0; dt 0; 0 0; 0 dt]

B={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}

C = [1 0 0 0; 0 0 1 0]

C={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}

Prior distributions

initial_state_variance=100.0 control_variance=0.1 goal_constraint_variance=0.00001 gamma_shape=1.5 gamma_scale_factor=0.5

Visualization parameters

x_limits={(-20, 20)} y_limits={(-20, 20)} fps=15 heatmap_resolution=100 plot_width=800 plot_height=400 agent_alpha=1.0 target_alpha=0.2 color_palette="tab10"

Environment definitions

door_obstacle_center_1={(-40.0, 0.0)} door_obstacle_size_1={(70.0, 5.0)} door_obstacle_center_2={(40.0, 0.0)} door_obstacle_size_2={(70.0, 5.0)}

wall_obstacle_center={(0.0, 0.0)} wall_obstacle_size={(10.0, 5.0)}

combined_obstacle_center_1={(-50.0, 0.0)} combined_obstacle_size_1={(70.0, 2.0)} combined_obstacle_center_2={(50.0, 0.0)} combined_obstacle_size_2={(70.0, 2.0)} combined_obstacle_center_3={(5.0, -1.0)} combined_obstacle_size_3={(3.0, 10.0)}

Agent configurations

agent1_id=1 agent1_radius=2.5 agent1_initial_position={(-4.0, 10.0)} agent1_target_position={(-10.0, -10.0)}

agent2_id=2 agent2_radius=1.5 agent2_initial_position={(-10.0, 5.0)} agent2_target_position={(10.0, -15.0)}

agent3_id=3 agent3_radius=1.0 agent3_initial_position={(-15.0, -10.0)} agent3_target_position={(10.0, 10.0)}

agent4_id=4 agent4_radius=2.5 agent4_initial_position={(0.0, -10.0)} agent4_target_position={(-10.0, 15.0)}

Experiment configurations

experiment_seeds={(42, 123)} results_dir="results" animation_template="{environment}_{seed}.gif" control_vis_filename="control_signals.gif" obstacle_distance_filename="obstacle_distance.png" path_uncertainty_filename="path_uncertainty.png" convergence_filename="convergence.png"

Equations

State space model:

x_{t+1} = A * x_t + B * u_t + w_t, w_t ~ N(0, control_variance)

y_t = C * x_t + v_t, v_t ~ N(0, observation_variance)

Obstacle avoidance constraint:

p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)

where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle

Goal constraint:

p(x_T | goal) ~ N(goal, goal_constraint_variance)

where x_T is the final position

Collision avoidance constraint:

p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)

where x_i, x_j are positions of agents i and j, r_i, r_j are their radii

Time

Dynamic DiscreteTime ModelTimeHorizon=nr_steps

ActInfOntologyAnnotation

dt=TimeStep gamma=ConstraintParameter nr_steps=TrajectoryLength nr_iterations=InferenceIterations nr_agents=NumberOfAgents softmin_temperature=SoftminTemperature A=StateTransitionMatrix B=ControlInputMatrix C=ObservationMatrix initial_state_variance=InitialStateVariance control_variance=ControlVariance goal_constraint_variance=GoalConstraintVariance

ModelParameters

nr_agents=4 nr_steps=40 nr_iterations=350

Footer

Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl

Signature

Creator: AI Assistant for GNN
Date: 2024-07-27
Status: Example for RxInfer.jl multi-agent trajectory planning

Parsed Sections

_HeaderComments

# GNN Example: RxInfer Multi-agent Trajectory Planning
# Format: Markdown representation of a Multi-agent Trajectory Planning model for RxInfer.jl
# Version: 1.0
# This file is machine-readable and represents the configuration for the RxInfer.jl multi-agent trajectory planning example.

ModelName

Multi-agent Trajectory Planning

GNNSection

RxInferMultiAgentTrajectoryPlanning

GNNVersionAndFlags

GNN v1

ModelAnnotation

This model represents a multi-agent trajectory planning scenario in RxInfer.jl.
It includes:
- State space model for agents moving in a 2D environment
- Obstacle avoidance constraints
- Goal-directed behavior
- Inter-agent collision avoidance
The model can be used to simulate trajectory planning in various environments with obstacles.

StateSpaceBlock

# Model parameters
dt[1,type=float]               # Time step for the state space model
gamma[1,type=float]            # Constraint parameter for the Halfspace node
nr_steps[1,type=int]           # Number of time steps in the trajectory
nr_iterations[1,type=int]      # Number of inference iterations
nr_agents[1,type=int]          # Number of agents in the simulation
softmin_temperature[1,type=float] # Temperature parameter for the softmin function
intermediate_steps[1,type=int] # Intermediate results saving interval
save_intermediates[1,type=bool] # Whether to save intermediate results

# State space matrices
A[4,4,type=float]              # State transition matrix
B[4,2,type=float]              # Control input matrix
C[2,4,type=float]              # Observation matrix

# Prior distributions
initial_state_variance[1,type=float]    # Prior on initial state
control_variance[1,type=float]          # Prior on control inputs
goal_constraint_variance[1,type=float]  # Goal constraints variance
gamma_shape[1,type=float]               # Parameters for GammaShapeRate prior
gamma_scale_factor[1,type=float]        # Parameters for GammaShapeRate prior

# Visualization parameters
x_limits[2,type=float]            # Plot boundaries (x-axis)
y_limits[2,type=float]            # Plot boundaries (y-axis)
fps[1,type=int]                   # Animation frames per second
heatmap_resolution[1,type=int]    # Heatmap resolution
plot_width[1,type=int]            # Plot width
plot_height[1,type=int]           # Plot height
agent_alpha[1,type=float]         # Visualization alpha for agents
target_alpha[1,type=float]        # Visualization alpha for targets
color_palette[1,type=string]      # Color palette for visualization

# Environment definitions
door_obstacle_center_1[2,type=float]    # Door environment, obstacle 1 center
door_obstacle_size_1[2,type=float]      # Door environment, obstacle 1 size
door_obstacle_center_2[2,type=float]    # Door environment, obstacle 2 center
door_obstacle_size_2[2,type=float]      # Door environment, obstacle 2 size

wall_obstacle_center[2,type=float]      # Wall environment, obstacle center
wall_obstacle_size[2,type=float]        # Wall environment, obstacle size

combined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center
combined_obstacle_size_1[2,type=float]   # Combined environment, obstacle 1 size
combined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center
combined_obstacle_size_2[2,type=float]   # Combined environment, obstacle 2 size
combined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center
combined_obstacle_size_3[2,type=float]   # Combined environment, obstacle 3 size

# Agent configurations
agent1_id[1,type=int]                   # Agent 1 ID
agent1_radius[1,type=float]             # Agent 1 radius
agent1_initial_position[2,type=float]   # Agent 1 initial position
agent1_target_position[2,type=float]    # Agent 1 target position

agent2_id[1,type=int]                   # Agent 2 ID
agent2_radius[1,type=float]             # Agent 2 radius
agent2_initial_position[2,type=float]   # Agent 2 initial position
agent2_target_position[2,type=float]    # Agent 2 target position

agent3_id[1,type=int]                   # Agent 3 ID
agent3_radius[1,type=float]             # Agent 3 radius
agent3_initial_position[2,type=float]   # Agent 3 initial position
agent3_target_position[2,type=float]    # Agent 3 target position

agent4_id[1,type=int]                   # Agent 4 ID
agent4_radius[1,type=float]             # Agent 4 radius
agent4_initial_position[2,type=float]   # Agent 4 initial position
agent4_target_position[2,type=float]    # Agent 4 target position

# Experiment configurations
experiment_seeds[2,type=int]            # Random seeds for reproducibility
results_dir[1,type=string]              # Base directory for results
animation_template[1,type=string]       # Filename template for animations
control_vis_filename[1,type=string]     # Filename for control visualization
obstacle_distance_filename[1,type=string] # Filename for obstacle distance plot
path_uncertainty_filename[1,type=string]  # Filename for path uncertainty plot
convergence_filename[1,type=string]       # Filename for convergence plot

Connections

# Model parameters
dt > A
(A, B, C) > state_space_model

# Agent trajectories
(state_space_model, initial_state_variance, control_variance) > agent_trajectories

# Goal constraints
(agent_trajectories, goal_constraint_variance) > goal_directed_behavior

# Obstacle avoidance
(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance

# Collision avoidance
(agent_trajectories, nr_agents) > collision_avoidance

# Complete planning system
(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system

InitialParameterization

# Model parameters
dt=1.0
gamma=1.0
nr_steps=40
nr_iterations=350
nr_agents=4
softmin_temperature=10.0
intermediate_steps=10
save_intermediates=false

# State space matrices
# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]
A={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}

# B = [0 0; dt 0; 0 0; 0 dt]
B={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}

# C = [1 0 0 0; 0 0 1 0]
C={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}

# Prior distributions
initial_state_variance=100.0
control_variance=0.1
goal_constraint_variance=0.00001
gamma_shape=1.5
gamma_scale_factor=0.5

# Visualization parameters
x_limits={(-20, 20)}
y_limits={(-20, 20)}
fps=15
heatmap_resolution=100
plot_width=800
plot_height=400
agent_alpha=1.0
target_alpha=0.2
color_palette="tab10"

# Environment definitions
door_obstacle_center_1={(-40.0, 0.0)}
door_obstacle_size_1={(70.0, 5.0)}
door_obstacle_center_2={(40.0, 0.0)}
door_obstacle_size_2={(70.0, 5.0)}

wall_obstacle_center={(0.0, 0.0)}
wall_obstacle_size={(10.0, 5.0)}

combined_obstacle_center_1={(-50.0, 0.0)}
combined_obstacle_size_1={(70.0, 2.0)}
combined_obstacle_center_2={(50.0, 0.0)}
combined_obstacle_size_2={(70.0, 2.0)}
combined_obstacle_center_3={(5.0, -1.0)}
combined_obstacle_size_3={(3.0, 10.0)}

# Agent configurations
agent1_id=1
agent1_radius=2.5
agent1_initial_position={(-4.0, 10.0)}
agent1_target_position={(-10.0, -10.0)}

agent2_id=2
agent2_radius=1.5
agent2_initial_position={(-10.0, 5.0)}
agent2_target_position={(10.0, -15.0)}

agent3_id=3
agent3_radius=1.0
agent3_initial_position={(-15.0, -10.0)}
agent3_target_position={(10.0, 10.0)}

agent4_id=4
agent4_radius=2.5
agent4_initial_position={(0.0, -10.0)}
agent4_target_position={(-10.0, 15.0)}

# Experiment configurations
experiment_seeds={(42, 123)}
results_dir="results"
animation_template="{environment}_{seed}.gif"
control_vis_filename="control_signals.gif"
obstacle_distance_filename="obstacle_distance.png"
path_uncertainty_filename="path_uncertainty.png"
convergence_filename="convergence.png"
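
As a sanity check, the A, B, and C matrices above can be reproduced from `dt` in a few lines of NumPy. This is an illustrative sketch, not pipeline output; the [px, vx, py, vy] state layout is inferred from the matrix structure:

```python
import numpy as np

dt = 1.0  # from the parameterization above

# Constant-velocity model in 2D with state x = [px, vx, py, vy]
A = np.array([[1, dt, 0,  0],
              [0,  1, 0,  0],
              [0,  0, 1, dt],
              [0,  0, 0,  1]], dtype=float)

# Control inputs drive the two velocity components
B = np.array([[0,  0],
              [dt, 0],
              [0,  0],
              [0, dt]], dtype=float)

# Only the two position components are observed
C = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

# One noise-free step of the state space model: x' = A x + B u, y = C x
x = np.array([-4.0, 0.0, 10.0, 0.0])  # agent 1 at its initial position, at rest
u = np.zeros(2)
x_next = A @ x + B @ u
y = C @ x                             # -> array([-4., 10.])
```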

Equations

# State space model:
# x_{t+1} = A * x_t + B * u_t + w_t,  w_t ~ N(0, control_variance)
# y_t = C * x_t + v_t,                v_t ~ N(0, observation_variance)
#
# Obstacle avoidance constraint:
# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)
# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle
#
# Goal constraint:
# p(x_T | goal) ~ N(goal, goal_constraint_variance)
# where x_T is the final position
#
# Collision avoidance constraint:
# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)
# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii
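
Each constraint above scores a scalar violation under a Gaussian. Below is a hypothetical NumPy sketch of the two distance terms they depend on, assuming axis-aligned rectangular obstacles; the actual RxInfer model expresses these through Halfspace nodes with softmin smoothing rather than explicit distance functions:

```python
import numpy as np

def obstacle_distance(pos, center, size):
    """Distance from a 2D point to an axis-aligned rectangle (0 inside it)."""
    half = np.asarray(size, dtype=float) / 2.0
    d = np.abs(np.asarray(pos, dtype=float) - np.asarray(center, dtype=float)) - half
    return float(np.linalg.norm(np.maximum(d, 0.0)))

def collision_margin(x_i, x_j, r_i, r_j):
    """||x_i - x_j|| - (r_i + r_j); negative values mean the agents overlap."""
    return float(np.linalg.norm(np.asarray(x_i) - np.asarray(x_j))) - (r_i + r_j)

# Values taken from the parameterization above
obstacle_distance([-4.0, 10.0], (0.0, 0.0), (10.0, 5.0))  # agent 1 vs. wall obstacle
collision_margin([-4.0, 10.0], [-10.0, 5.0], 2.5, 1.5)    # agents 1 and 2
```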

Time

Dynamic
DiscreteTime
ModelTimeHorizon=nr_steps

ActInfOntologyAnnotation

dt=TimeStep
gamma=ConstraintParameter
nr_steps=TrajectoryLength
nr_iterations=InferenceIterations
nr_agents=NumberOfAgents
softmin_temperature=SoftminTemperature
A=StateTransitionMatrix
B=ControlInputMatrix
C=ObservationMatrix
initial_state_variance=InitialStateVariance
control_variance=ControlVariance
goal_constraint_variance=GoalConstraintVariance

ModelParameters

nr_agents=4
nr_steps=40
nr_iterations=350

Footer

Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl

Signature

Creator: AI Assistant for GNN
Date: 2024-07-27
Status: Example for RxInfer.jl multi-agent trajectory planning

JSON Files

full_model_data.json

{
  "_HeaderComments": "# GNN Example: RxInfer Multi-agent Trajectory Planning\n# Format: Markdown representation of a Multi-agent Trajectory Planning model for RxInfer.jl\n# Version: 1.0\n# This file is machine-readable and represents the configuration for the RxInfer.jl multi-agent trajectory planning example.",
  "ModelName": "Multi-agent Trajectory Planning",
  "GNNSection": "RxInferMultiAgentTrajectoryPlanning",
  "GNNVersionAndFlags": "GNN v1",
  "ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
  "StateSpaceBlock": "# Model parameters\ndt[1,type=float]               # Time step for the state space model\ngamma[1,type=float]            # Constraint parameter for the Halfspace node\nnr_steps[1,type=int]           # Number of time steps in the trajectory\nnr_iterations[1,type=int]      # Number of inference iterations\nnr_agents[1,type=int]          # Number of agents in the simulation\nsoftmin_temperature[1,type=float] # Temperature parameter for the softmin function\nintermediate_steps[1,type=int] # Intermediate results saving interval\nsave_intermediates[1,type=bool] # Whether to save intermediate results\n\n# State space matrices\nA[4,4,type=float]              # State transition matrix\nB[4,2,type=float]              # Control input matrix\nC[2,4,type=float]              # Observation matrix\n\n# Prior distributions\ninitial_state_variance[1,type=float]    # Prior on initial state\ncontrol_variance[1,type=float]          # Prior on control inputs\ngoal_constraint_variance[1,type=float]  # Goal constraints variance\ngamma_shape[1,type=float]               # Parameters for GammaShapeRate prior\ngamma_scale_factor[1,type=float]        # Parameters for GammaShapeRate prior\n\n# Visualization parameters\nx_limits[2,type=float]            # Plot boundaries (x-axis)\ny_limits[2,type=float]            # Plot boundaries (y-axis)\nfps[1,type=int]                   # Animation frames per second\nheatmap_resolution[1,type=int]    # Heatmap resolution\nplot_width[1,type=int]            # Plot width\nplot_height[1,type=int]           # Plot height\nagent_alpha[1,type=float]         # Visualization alpha for agents\ntarget_alpha[1,type=float]        # Visualization alpha for targets\ncolor_palette[1,type=string]      # Color palette for visualization\n\n# Environment definitions\ndoor_obstacle_center_1[2,type=float]    # Door environment, obstacle 1 center\ndoor_obstacle_size_1[2,type=float]      # Door environment, obstacle 1 size\ndoor_obstacle_center_2[2,type=float]    # Door environment, obstacle 2 center\ndoor_obstacle_size_2[2,type=float]      # Door environment, obstacle 2 size\n\nwall_obstacle_center[2,type=float]      # Wall environment, obstacle center\nwall_obstacle_size[2,type=float]        # Wall environment, obstacle size\n\ncombined_obstacle_center_1[2,type=float] # Combined environment, obstacle 1 center\ncombined_obstacle_size_1[2,type=float]   # Combined environment, obstacle 1 size\ncombined_obstacle_center_2[2,type=float] # Combined environment, obstacle 2 center\ncombined_obstacle_size_2[2,type=float]   # Combined environment, obstacle 2 size\ncombined_obstacle_center_3[2,type=float] # Combined environment, obstacle 3 center\ncombined_obstacle_size_3[2,type=float]   # Combined environment, obstacle 3 size\n\n# Agent configurations\nagent1_id[1,type=int]                   # Agent 1 ID\nagent1_radius[1,type=float]             # Agent 1 radius\nagent1_initial_position[2,type=float]   # Agent 1 initial position\nagent1_target_position[2,type=float]    # Agent 1 target position\n\nagent2_id[1,type=int]                   # Agent 2 ID\nagent2_radius[1,type=float]             # Agent 2 radius\nagent2_initial_position[2,type=float]   # Agent 2 initial position\nagent2_target_position[2,type=float]    # Agent 2 target position\n\nagent3_id[1,type=int]                   # Agent 3 ID\nagent3_radius[1,type=float]             # Agent 3 radius\nagent3_initial_position[2,type=float]   # Agent 3 initial position\nagent3_target_position[2,type=float]    # Agent 3 target 
position\n\nagent4_id[1,type=int]                   # Agent 4 ID\nagent4_radius[1,type=float]             # Agent 4 radius\nagent4_initial_position[2,type=float]   # Agent 4 initial position\nagent4_target_position[2,type=float]    # Agent 4 target position\n\n# Experiment configurations\nexperiment_seeds[2,type=int]            # Random seeds for reproducibility\nresults_dir[1,type=string]              # Base directory for results\nanimation_template[1,type=string]       # Filename template for animations\ncontrol_vis_filename[1,type=string]     # Filename for control visualization\nobstacle_distance_filename[1,type=string] # Filename for obstacle distance plot\npath_uncertainty_filename[1,type=string]  # Filename for path uncertainty plot\nconvergence_filename[1,type=string]       # Filename for convergence plot",
  "Connections": "# Model parameters\ndt > A\n(A, B, C) > state_space_model\n\n# Agent trajectories\n(state_space_model, initial_state_variance, control_variance) > agent_trajectories\n\n# Goal constraints\n(agent_trajectories, goal_constraint_variance) > goal_directed_behavior\n\n# Obstacle avoidance\n(agent_trajectories, gamma, gamma_shape, gamma_scale_factor) > obstacle_avoidance\n\n# Collision avoidance\n(agent_trajectories, nr_agents) > collision_avoidance\n\n# Complete planning system\n(goal_directed_behavior, obstacle_avoidance, collision_avoidance) > planning_system",
  "InitialParameterization": "# Model parameters\ndt=1.0\ngamma=1.0\nnr_steps=40\nnr_iterations=350\nnr_agents=4\nsoftmin_temperature=10.0\nintermediate_steps=10\nsave_intermediates=false\n\n# State space matrices\n# A = [1 dt 0 0; 0 1 0 0; 0 0 1 dt; 0 0 0 1]\nA={(1.0, 1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, 1.0, 1.0), (0.0, 0.0, 0.0, 1.0)}\n\n# B = [0 0; dt 0; 0 0; 0 dt]\nB={(0.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 1.0)}\n\n# C = [1 0 0 0; 0 0 1 0]\nC={(1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 1.0, 0.0)}\n\n# Prior distributions\ninitial_state_variance=100.0\ncontrol_variance=0.1\ngoal_constraint_variance=0.00001\ngamma_shape=1.5\ngamma_scale_factor=0.5\n\n# Visualization parameters\nx_limits={(-20, 20)}\ny_limits={(-20, 20)}\nfps=15\nheatmap_resolution=100\nplot_width=800\nplot_height=400\nagent_alpha=1.0\ntarget_alpha=0.2\ncolor_palette=\"tab10\"\n\n# Environment definitions\ndoor_obstacle_center_1={(-40.0, 0.0)}\ndoor_obstacle_size_1={(70.0, 5.0)}\ndoor_obstacle_center_2={(40.0, 0.0)}\ndoor_obstacle_size_2={(70.0, 5.0)}\n\nwall_obstacle_center={(0.0, 0.0)}\nwall_obstacle_size={(10.0, 5.0)}\n\ncombined_obstacle_center_1={(-50.0, 0.0)}\ncombined_obstacle_size_1={(70.0, 2.0)}\ncombined_obstacle_center_2={(50.0, 0.0)}\ncombined_obstacle_size_2={(70.0, 2.0)}\ncombined_obstacle_center_3={(5.0, -1.0)}\ncombined_obstacle_size_3={(3.0, 10.0)}\n\n# Agent configurations\nagent1_id=1\nagent1_radius=2.5\nagent1_initial_position={(-4.0, 10.0)}\nagent1_target_position={(-10.0, -10.0)}\n\nagent2_id=2\nagent2_radius=1.5\nagent2_initial_position={(-10.0, 5.0)}\nagent2_target_position={(10.0, -15.0)}\n\nagent3_id=3\nagent3_radius=1.0\nagent3_initial_position={(-15.0, -10.0)}\nagent3_target_position={(10.0, 10.0)}\n\nagent4_id=4\nagent4_radius=2.5\nagent4_initial_position={(0.0, -10.0)}\nagent4_target_position={(-10.0, 15.0)}\n\n# Experiment configurations\nexperiment_seeds={(42, 123)}\nresults_dir=\"results\"\nanimation_template=\"{environment}_{seed}.gif\"\ncontrol_vis_filename=\"control_signals.gif\"\nobstacle_distance_filename=\"obstacle_distance.png\"\npath_uncertainty_filename=\"path_uncertainty.png\"\nconvergence_filename=\"convergence.png\"",
  "Equations": "# State space model:\n# x_{t+1} = A * x_t + B * u_t + w_t,  w_t ~ N(0, control_variance)\n# y_t = C * x_t + v_t,                v_t ~ N(0, observation_variance)\n#\n# Obstacle avoidance constraint:\n# p(x_t | obstacle) ~ N(d(x_t, obstacle), gamma)\n# where d(x_t, obstacle) is the distance from position x_t to the nearest obstacle\n#\n# Goal constraint:\n# p(x_T | goal) ~ N(goal, goal_constraint_variance)\n# where x_T is the final position\n#\n# Collision avoidance constraint:\n# p(x_i, x_j) ~ N(||x_i - x_j|| - (r_i + r_j), gamma)\n# where x_i, x_j are positions of agents i and j, r_i, r_j are their radii",
  "Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
  "ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance",
  "ModelParameters": "nr_agents=4\nnr_steps=40\nnr_iterations=350",
  "Footer": "Multi-agent Trajectory Planning - GNN Representation for RxInfer.jl",
  "Signature": "Creator: AI Assistant for GNN\nDate: 2024-07-27\nStatus: Example for RxInfer.jl multi-agent trajectory planning"
}

model_metadata.json

{
  "ModelName": "Multi-agent Trajectory Planning",
  "ModelAnnotation": "This model represents a multi-agent trajectory planning scenario in RxInfer.jl.\nIt includes:\n- State space model for agents moving in a 2D environment\n- Obstacle avoidance constraints\n- Goal-directed behavior\n- Inter-agent collision avoidance\nThe model can be used to simulate trajectory planning in various environments with obstacles.",
  "GNNVersionAndFlags": "GNN v1",
  "Time": "Dynamic\nDiscreteTime\nModelTimeHorizon=nr_steps",
  "ActInfOntologyAnnotation": "dt=TimeStep\ngamma=ConstraintParameter\nnr_steps=TrajectoryLength\nnr_iterations=InferenceIterations\nnr_agents=NumberOfAgents\nsoftmin_temperature=SoftminTemperature\nA=StateTransitionMatrix\nB=ControlInputMatrix\nC=ObservationMatrix\ninitial_state_variance=InitialStateVariance\ncontrol_variance=ControlVariance\ngoal_constraint_variance=GoalConstraintVariance"
}

MCP Integration Report (Step 7)

🤖 MCP Integration and API Report

🗓️ Report Generated: 2025-06-06 13:08:27

MCP Core Directory: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/mcp
Project Source Root (for modules): /home/trim/Documents/GitHub/GeneralizedNotationNotation/src
Output Directory for this report: /home/trim/Documents/GitHub/GeneralizedNotationNotation/output/mcp_processing_step

🌐 Global Summary of Registered MCP Tools

This section lists all tools currently registered with the MCP system, along with their defining module, arguments, and description.

🔬 Core MCP File Check

This section verifies the presence of essential MCP files in the core directory: /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/mcp

Status: 5/5 core MCP files found. All core files are present.

🧩 Functional Module MCP Integration & API Check

Checking for mcp.py in these subdirectories of /home/trim/Documents/GitHub/GeneralizedNotationNotation/src: ['export', 'gnn', 'gnn_type_checker', 'ontology', 'setup', 'tests', 'visualization', 'llm']
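
For reference, this presence check amounts to testing each listed subdirectory for an mcp.py file; a minimal sketch is below (the directory names and filename come from the report, while the helper function itself is hypothetical):

```python
from pathlib import Path

def check_mcp_modules(src_root: str, subdirs: list[str]) -> dict[str, bool]:
    """Report which functional modules expose an mcp.py integration file."""
    root = Path(src_root)
    return {name: (root / name / "mcp.py").is_file() for name in subdirs}

status = check_mcp_modules(
    "src",
    ["export", "gnn", "gnn_type_checker", "ontology",
     "setup", "tests", "visualization", "llm"],
)
```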

Module: export (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/export)


Module: gnn (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn)


Module: gnn_type_checker (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/gnn_type_checker)


Module: ontology (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/ontology)


Module: setup (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/setup)


Module: tests (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/tests)


Module: visualization (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/visualization)


Module: llm (at /home/trim/Documents/GitHub/GeneralizedNotationNotation/src/llm)


📊 Overall Module Integration Summary

Ontology Processing (Step 8)

🧬 GNN Ontological Annotations Report

📊 Summary of Ontology Processing


🗓️ Report Generated: 2025-06-06 13:08:28
🎯 GNN Source Directory: src/gnn/examples
📖 Ontology Terms Definition: src/ontology/act_inf_ontology_terms.json (Loaded: 48 terms)


Ontological Annotations for src/gnn/examples/pymdp_pomdp_agent.md

Mappings:

Validation Summary: All ontological terms are recognized.


Ontological Annotations for src/gnn/examples/rxinfer_multiagent_gnn.md

Mappings:

Validation Summary: 12 unrecognized ontological term(s) found.


Rendered Simulators (Step 9)

LLM Processing Outputs (Step 11)

LLM Outputs for pymdp_pomdp_agent: pymdp_pomdp_agent

JSON Files

pymdp_pomdp_agent_comprehensive_analysis.json

{
  "model_purpose": "The GNN file represents a Multifactor PyMDP agent designed for decision-making processes in environments with multiple observation modalities and hidden state factors, utilizing Active Inference principles.",
  "key_components": {
    "states": {
      "hidden_states": {
        "reward_level": {
          "num_states": 2,
          "description": "Represents the level of reward, with two possible states."
        },
        "decision_state": {
          "num_states": 3,
          "description": "Represents the current decision-making state, with three possible states."
        }
      }
    },
    "observations": {
      "state_observation": {
        "num_outcomes": 3,
        "description": "Observations related to the current state with three possible outcomes."
      },
      "reward": {
        "num_outcomes": 3,
        "description": "Observations related to the reward received with three possible outcomes."
      },
      "decision_proprioceptive": {
        "num_outcomes": 3,
        "description": "Observations related to proprioceptive feedback during decision-making with three possible outcomes."
      }
    },
    "actions": {
      "decision_state_control": {
        "num_actions": 3,
        "description": "Three possible actions that can be taken to control the decision state."
      }
    },
    "parameters": {
      "preferences": {
        "C_m0": "Preferences for modality 0.",
        "C_m1": "Preferences for modality 1.",
        "C_m2": "Preferences for modality 2."
      },
      "priors": {
        "D_f0": "Prior distribution over hidden states for factor 0.",
        "D_f1": "Prior distribution over hidden states for factor 1."
      }
    }
  },
  "component_interactions": {
    "hidden_states": {
      "D_f0 and D_f1": "These priors connect to the hidden states s_f0 and s_f1.",
      "s_f0 and s_f1": "These hidden states interact with the A_m matrices to produce observations."
    },
    "observations": {
      "A_m0, A_m1, A_m2": "Likelihood matrices for observations interact with hidden states to generate observations o_m0, o_m1, o_m2."
    },
    "actions": {
      "u_f1": "The action taken for the controllable factor affects the B_f1 transition matrix, impacting the hidden state transitions."
    },
    "expected_free_energy": {
      "G": "The expected free energy is derived from the preference vectors and is used to infer the policy \u03c0_f1."
    }
  },
  "data_types_and_dimensions": {
    "hidden_states": {
      "s_f0": {
        "type": "float",
        "dimensions": [
          2,
          1
        ]
      },
      "s_f1": {
        "type": "float",
        "dimensions": [
          3,
          1
        ]
      }
    },
    "observations": {
      "o_m0": {
        "type": "float",
        "dimensions": [
          3,
          1
        ]
      },
      "o_m1": {
        "type": "float",
        "dimensions": [
          3,
          1
        ]
      },
      "o_m2": {
        "type": "float",
        "dimensions": [
          3,
          1
        ]
      }
    },
    "matrices": {
      "A_m0": {
        "type": "float",
        "dimensions": [
          3,
          2,
          3
        ]
      },
      "A_m1": {
        "type": "float",
        "dimensions": [
          3,
          2,
          3
        ]
      },
      "A_m2": {
        "type": "float",
        "dimensions": [
          3,
          2,
          3
        ]
      },
      "B_f0": {
        "type": "float",
        "dimensions": [
          2,
          2,
          1
        ]
      },
      "B_f1": {
        "type": "float",
        "dimensions": [
          3,
          3,
          3
        ]
      }
    },
    "preferences": {
      "C_m0": {
        "type": "float",
        "dimensions": [
          3
        ]
      },
      "C_m1": {
        "type": "float",
        "dimensions": [
          3
        ]
      },
      "C_m2": {
        "type": "float",
        "dimensions": [
          3
        ]
      }
    },
    "priors": {
      "D_f0": {
        "type": "float",
        "dimensions": [
          2
        ]
      },
      "D_f1": {
        "type": "float",
        "dimensions": [
          3
        ]
      }
    }
  },
  "potential_applications": "This model can be used in scenarios requiring decision-making under uncertainty, such as robotics, adaptive learning systems, and other AI applications where agents learn from diverse observations to make informed decisions.",
  "limitations_or_ambiguities": "The model's assumptions about the independence of hidden states and observations may not hold in all environments, potentially limiting its applicability. Additionally, the details of the control mechanism for the decision state may require further specifications.",
  "ontology_mapping_assessment": "The ActInfOntology terms are present and relevant, as they map directly to the components defined in the GNN file, aiding in understanding the model's structure and interactions within an Active Inference framework."
}

pymdp_pomdp_agent_qa.json

[
  {
    "question": "What are the implications of having multiple observation modalities in the decision-making process of the Multifactor PyMDP agent?",
    "answer": "The GNN file specifies that the Multifactor PyMDP agent incorporates multiple observation modalities\u2014specifically \"state_observation,\" \"reward,\" and \"decision_proprioceptive,\" each with three possible outcomes. The implications of having these multiple observation modalities in the decision-making process include:\n\n1. **Enhanced Information Integration**: The agent can synthesize information from various sources, allowing it to make more informed decisions by considering different aspects of the environment and its own state.\n\n2. **Robustness Against Uncertainty**: With multiple modalities, the agent can mitigate the impact of noise or uncertainty in any single observation type. This redundancy can lead to more reliable state inferences.\n\n3. **Complex Decision-Making**: The presence of multiple modalities allows the agent to capture a richer representation of its environment, facilitating more complex and adaptive decision-making strategies.\n\n4. **Diverse Action Considerations**: Each modality can influence the agent's preferences and policies differently, leading to potentially more nuanced and context-sensitive actions.\n\nOverall, the integration of multiple observation modalities enhances the agent's ability to understand and navigate its environment effectively."
  },
  {
    "question": "How does the choice of hidden state factors ('reward_level' and 'decision_state') influence the agent's ability to learn and adapt over time?",
    "answer": "The GNN file does not provide explicit information on how the choice of hidden state factors ('reward_level' and 'decision_state') influences the agent's ability to learn and adapt over time. It describes the structure and parameterization of the agent but does not elaborate on the learning dynamics or adaptation mechanisms related to these specific hidden state factors. Therefore, it is not possible to answer the question based solely on the provided GNN file content."
  },
  {
    "question": "In what ways do the transition matrices B_f0 and B_f1 reflect the agent's assumptions about the environment and control over its actions?",
    "answer": "The transition matrices B_f0 and B_f1 reflect the agent's assumptions about the environment and control over its actions in the following ways:\n\n1. **B_f0 (Factor 0: \"reward_level\")**: This matrix is defined as a 2x2 identity matrix, indicating that the transition between states is deterministic and does not depend on any actions (uncontrolled). This suggests that the agent assumes the \"reward_level\" is stable and does not change due to its actions, reflecting a lack of control over this factor.\n\n2. **B_f1 (Factor 1: \"decision_state\")**: This matrix is defined as a 3-dimensional transition matrix with 3 actions available. Each action results in the state transitioning deterministically to the next state, reflecting the agent's control over its decisions. This indicates that the agent assumes that its actions can directly influence the \"decision_state\" and that there are distinct outcomes associated with each action. \n\nIn summary, B_f0 implies a lack of control over the \"reward_level,\" while B_f1 indicates that the agent has control over the \"decision_state\" through its actions."
  },
  {
    "question": "What are the potential impacts of the defined preferences in the C_vectors on the agent's expected free energy and overall decision-making strategy?",
    "answer": "The GNN file outlines preferences in the C_vectors that represent the agent's log preferences for each observation modality. Specifically, the C_vectors influence the agent's expected free energy (G) by indicating how much the agent values certain observations over others. \n\nFor instance, C_m1 has a value of 1.0 for the first outcome and -2.0 for the second outcome, suggesting that the agent strongly prefers the first observation and significantly dislikes the second. This disparity will likely skew the agent's decision-making strategy towards actions that are more likely to produce the preferred outcome (o_m1) while avoiding those that lead to the less preferred outcome. \n\nIn summary, the defined preferences in the C_vectors directly affect the expected free energy by prioritizing certain outcomes, thereby shaping the agent's overall decision-making strategy to maximize preferred observations and minimize less desirable ones."
  },
  {
    "question": "How does the model ensure that the policies inferred from the states are optimal given the defined likelihoods and transitions?",
    "answer": "The GNN file does not provide specific details on how the model ensures that the policies inferred from the states are optimal given the defined likelihoods and transitions. It mentions standard PyMDP agent equations for state inference, policy inference, and action sampling, specifically referencing functions like `infer_states(o)`, `infer_policies()`, and `sample_action()`, but it lacks information on the optimization process or criteria used to ensure policy optimality. Therefore, it does not contain enough information to answer the question explicitly."
  }
]
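
The functions cited in the last answer (`infer_states(o)`, `infer_policies()`, `sample_action()`) are the standard PyMDP agent interface. A minimal sketch of the action-perception loop they compose, assuming the A/B/C/D lists from the earlier sketch have been filled in and normalized; `env` and `T` are hypothetical stand-ins for an environment and a time horizon:

```python
import numpy as np
from pymdp.agent import Agent

def to_obj_array(mats):
    """Pack a list of arrays into the object array PyMDP expects."""
    out = np.empty(len(mats), dtype=object)
    for i, m in enumerate(mats):
        out[i] = m
    return out

agent = Agent(A=to_obj_array(A), B=to_obj_array(B),
              C=to_obj_array(C), D=to_obj_array(D))

obs = env.reset()                     # one outcome index per modality, e.g. [0, 2, 1]
for t in range(T):
    qs = agent.infer_states(obs)      # posterior over both hidden factors
    q_pi, G = agent.infer_policies()  # policy posterior and expected free energy (G)
    action = agent.sample_action()    # one action per factor; only f1 is controllable
    obs = env.step(action)
```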

Text/Log Files

pymdp_pomdp_agent_summary.txt

### Summary of the GNN Model: Multifactor PyMDP Agent

**Model Name:** Multifactor PyMDP Agent v1

**Purpose:** This model represents a partially observable Markov decision process (POMDP) agent, implemented with PyMDP, that utilizes multiple observation modalities and hidden state factors to facilitate decision-making through active inference. It is designed to capture complex interactions between observations, hidden states, and actions, making it suitable for scenarios where agents must infer their states based on varied inputs.

**Key Components:**
1. **Observation Modalities:**
   - **State Observation:** 3 outcomes
   - **Reward:** 3 outcomes
   - **Decision Proprioceptive:** 3 outcomes

2. **Hidden State Factors:**
   - **Reward Level:** 2 states
   - **Decision State:** 3 states

3. **Control Mechanism:**
   - The **decision_state** factor is controllable with 3 possible actions.

**Main Connections:**
- **Hidden States to Observations:** The model connects hidden states (s_f0 and s_f1) to the likelihood matrices (A_m0, A_m1, A_m2) that determine the observations.
- **Actions and State Transitions:** The action taken (u_f1) influences the state transitions (B_f1) for the controllable factor, while the other factor's transitions (B_f0) remain uncontrolled.
- **Free Energy and Policy:** The expected free energy (G) is influenced by the preferences (C_m0, C_m1, C_m2) for each modality and directly impacts the policy distribution (π_f1) for the controllable factor.

Overall, this GNN model integrates observations from different modalities, manages transitions between hidden states, and utilizes a policy for decision-making, making it a robust framework for active inference in complex environments.
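
The transition structure described in the summary and Q&A (an uncontrolled identity for B_f0, deterministic action-dependent transitions for B_f1) can be written out concretely. A minimal sketch under one plausible reading of the Q&A, where each of B_f1's three actions deterministically selects the matching next decision state:

```python
import numpy as np

# B_f0 ("reward_level"): 2x2 identity with one dummy action slice;
# the factor never changes and is outside the agent's control.
B_f0 = np.eye(2)[:, :, None]               # shape (2, 2, 1)

# B_f1 ("decision_state"): action a sends every current state to state a.
B_f1 = np.zeros((3, 3, 3))                 # (next, current, action)
for a in range(3):
    B_f1[a, :, a] = 1.0

assert np.allclose(B_f1.sum(axis=0), 1.0)  # each column is a valid distribution
```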

LLM Outputs for rxinfer_multiagent_gnn: rxinfer_multiagent_gnn

JSON Files

rxinfer_multiagent_gnn_comprehensive_analysis.json

{
  "model_purpose": "The model represents a multi-agent trajectory planning scenario designed for use with RxInfer.jl, focusing on simulating agents moving in a 2D environment while avoiding obstacles and ensuring collision avoidance among agents.",
  "key_components": {
    "states": {
      "state_space_model": "Describes the state of each agent in the environment, including positions and velocities.",
      "initial_state_variance": "Defines the uncertainty in the initial state of the agents."
    },
    "observations": {
      "observation_matrix": "Maps the state of the agents to observable outputs, capturing their positions."
    },
    "actions": {
      "control_inputs": "Controls the agents' movements through the control input matrix."
    },
    "constraints": {
      "obstacle_avoidance": "Ensures agents do not collide with obstacles in the environment.",
      "goal_constraints": "Directs agents towards their respective target positions while accounting for uncertainty.",
      "collision_avoidance": "Prevents agents from colliding with each other based on their physical radii."
    }
  },
  "component_interactions": {
    "state_space_model": {
      "inputs": [
        "dt",
        "A",
        "B",
        "C"
      ],
      "outputs": [
        "agent_trajectories"
      ]
    },
    "agent_trajectories": {
      "inputs": [
        "initial_state_variance",
        "control_variance"
      ],
      "outputs": [
        "goal_directed_behavior",
        "obstacle_avoidance",
        "collision_avoidance"
      ]
    },
    "goal_directed_behavior": {
      "inputs": [
        "agent_trajectories",
        "goal_constraint_variance"
      ],
      "outputs": [
        "planning_system"
      ]
    },
    "obstacle_avoidance": {
      "inputs": [
        "agent_trajectories",
        "gamma",
        "gamma_shape",
        "gamma_scale_factor"
      ]
    },
    "collision_avoidance": {
      "inputs": [
        "agent_trajectories",
        "nr_agents"
      ]
    }
  },
  "data_types_and_dimensions": {
    "parameters": {
      "dt": "float",
      "gamma": "float",
      "nr_steps": "int",
      "nr_iterations": "int",
      "nr_agents": "int",
      "softmin_temperature": "float",
      "intermediate_steps": "int",
      "save_intermediates": "bool"
    },
    "matrices": {
      "A": "4x4 float",
      "B": "4x2 float",
      "C": "2x4 float"
    },
    "variances": {
      "initial_state_variance": "float",
      "control_variance": "float",
      "goal_constraint_variance": "float",
      "gamma_shape": "float",
      "gamma_scale_factor": "float"
    },
    "visualization": {
      "x_limits": "2x float",
      "y_limits": "2x float",
      "fps": "int",
      "heatmap_resolution": "int",
      "plot_width": "int",
      "plot_height": "int",
      "agent_alpha": "float",
      "target_alpha": "float",
      "color_palette": "string"
    },
    "obstacles": {
      "door_obstacle_center_1": "2x float",
      "door_obstacle_size_1": "2x float",
      "wall_obstacle_center": "2x float",
      "wall_obstacle_size": "2x float"
    },
    "agents": {
      "agent_id": "int",
      "agent_radius": "float",
      "initial_position": "2x float",
      "target_position": "2x float"
    }
  },
  "potential_applications": [
    "Simulating multi-agent interactions in robotics.",
    "Developing algorithms for autonomous navigation in complex environments.",
    "Research in artificial intelligence regarding multi-agent systems and obstacle avoidance."
  ],
  "limitations_or_ambiguities": [
    "The model does not specify the exact dynamics of agents' control inputs beyond the control input matrix.",
    "Performance and efficiency in real-time applications are not addressed.",
    "Assumptions regarding agent behavior and environmental conditions may limit applicability in real-world scenarios."
  ],
  "ontology_mapping_assessment": {
    "terms_present": [
      "TimeStep",
      "ConstraintParameter",
      "TrajectoryLength",
      "InferenceIterations",
      "NumberOfAgents",
      "SoftminTemperature",
      "StateTransitionMatrix",
      "ControlInputMatrix",
      "ObservationMatrix",
      "InitialStateVariance",
      "ControlVariance",
      "GoalConstraintVariance"
    ],
    "relevance": "The terms are relevant and properly mapped to the components of the model, enhancing clarity and standardization in understanding the model's structure and parameters."
  }
}
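
The A (4x4), B (4x2), and C (2x4) shapes listed above are consistent with a linear-Gaussian state-space model over 2D position and velocity. Below is a minimal numpy sketch assuming a constant-velocity discretization; the report gives only the shapes, so the matrix entries and the `dt` value here are illustrative:

```python
import numpy as np

dt = 1.0  # time step; listed as a parameter above, value assumed

# State x = [px, vx, py, vy] (assumed ordering).
A = np.array([[1, dt, 0,  0],   # 4x4 state transition
              [0,  1, 0,  0],
              [0,  0, 1, dt],
              [0,  0, 0,  1]], dtype=float)

B = np.array([[0,  0],          # 4x2 control input: accelerations drive velocity
              [dt, 0],
              [0,  0],
              [0, dt]], dtype=float)

C = np.array([[1, 0, 0, 0],     # 2x4 observation: positions only
              [0, 0, 1, 0]], dtype=float)

def rollout(x0, controls):
    """Deterministic trajectory: x_{t+1} = A x_t + B u_t, y_t = C x_t."""
    x, ys = x0, []
    for u in controls:
        x = A @ x + B @ u
        ys.append(C @ x)
    return np.array(ys)
```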

Text/Log Files

rxinfer_multiagent_gnn_summary.txt

### Summary of the GNN Model: Multi-agent Trajectory Planning

**Model Name**: Multi-agent Trajectory Planning

**Purpose**: This model is designed for simulating trajectory planning in a multi-agent environment using the RxInfer.jl framework. It aims to facilitate the movement of multiple agents in a 2D environment while accounting for obstacles and ensuring collision avoidance between agents.

**Key Components**:

1. **State Space Model**:
   - **Parameters**:
     - Time step (`dt`), constraint parameter (`gamma`), number of time steps (`nr_steps`), number of agents (`nr_agents`), and others that dictate the simulation dynamics.
   - **Matrices**:
     - State transition matrix (`A`), control input matrix (`B`), and observation matrix (`C`) are defined to model agent movements and observations.

2. **Prior Distributions**:
   - Variances for initial states, control inputs, and goal constraints are established, allowing for a probabilistic representation of the model states.

3. **Agent Configurations**:
   - Each agent (four in total) is defined by unique IDs, radii, initial positions, and target positions, allowing for individual trajectory planning.

4. **Environment Definitions**:
   - Several obstacles are defined (doors, walls, and combined obstacles) to simulate realistic scenarios affecting agent trajectories.

5. **Experiment Configurations**:
   - The model includes settings for reproducibility (random seeds) and file naming conventions for saving results and visualizations.

**Main Connections**:
- The model interlinks various components to form a cohesive planning system:
  - The state space model generates agent trajectories based on control inputs and initial states.
  - Goal-directed behavior is influenced by the trajectories and goal constraints.
  - Obstacle avoidance mechanisms are integrated using distance metrics from agent positions to obstacles.
  - Collision avoidance is implemented, ensuring that agents do not occupy the same space based on their defined radii.
  - The complete planning system synthesizes these elements, facilitating coordinated movement of multiple agents.

This GNN model effectively represents a structured approach to multi-agent trajectory planning, providing a comprehensive framework for simulating and analyzing agent dynamics in complex environments.
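
The parameters above include a `softmin_temperature`, and the summary describes distance-based obstacle and collision avoidance. One common way such constraints are smoothed is a softmin over distances, sketched below; this is illustrative and not necessarily RxInfer's exact formulation:

```python
import numpy as np

def softmin(distances, temperature):
    """Smooth minimum; approaches min(distances) as temperature -> 0."""
    d = np.asarray(distances, dtype=float)
    return -temperature * np.log(np.sum(np.exp(-d / temperature)))

def collision_margin(p_i, p_j, r_i, r_j):
    """Signed clearance between two agents; negative means overlap."""
    return np.linalg.norm(np.asarray(p_i) - np.asarray(p_j)) - (r_i + r_j)

# Example: smooth distance of one agent to several obstacle boundaries.
clearances = [1.8, 0.4, 2.5]
print(softmin(clearances, temperature=0.1))   # ~0.4, dominated by the nearest
```

A low temperature makes the penalty track the single nearest obstacle, while a higher temperature blends contributions from all of them, which keeps the objective differentiable for inference.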

Pipeline Log

Other Output Files/Directories